PR | ialidzhikov: Automated cherry pick of #109836: Fix OpenAPI loading error caused by empty APIService
Result | FAILURE
Tests | 2 failed / 482 succeeded
Started |
Elapsed | 19m15s
Revision |
Builder | 4a0bfe1f-cf91-11ec-bd1a-9e189fdc235a
Refs | release-1.21:e74362de 109898:b19dceb5
infra-commit | 6d2aa3e00
job-version | v1.21.13-rc.0.16+990a7f2e092094
kubetest-version |
repo | k8s.io/kubernetes
repo-commit | 990a7f2e09209404064954baff8c059ae23a5a24
repos | {u'k8s.io/kubernetes': u'release-1.21:e74362de8497d8e34a4abd51a1b6eca21229820f,109898:b19dceb5aa3a4461af94d7408f4332f43f81421a'}
revision | v1.21.13-rc.0.16+990a7f2e092094
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sMirrorPodWithGracePeriod\swhen\screate\sa\smirror\spod\s\smirror\spod\stermination\sshould\ssatisfy\sgrace\speriod\swhen\sstatic\spod\sis\sdeleted\s\[NodeConformance\]$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_grace_period_test.go:54
Unexpected error:
    <*errors.StatusError | 0xc0003a4a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil},
            Status: "Failure",
            Message: "pods \"graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8\" not found",
            Reason: "NotFound",
            Details: {
                Name: "graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
pods "graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8" not found
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_grace_period_test.go:72
from junit_cos-stable1_03.xml
[BeforeEach] [sig-node] MirrorPodWithGracePeriod
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename mirror-pod-with-grace-period
May 9 12:31:58.304: INFO: Skipping waiting for service account
[BeforeEach] when create a mirror pod
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_grace_period_test.go:37
STEP: create the static pod
May 9 12:31:58.304: INFO: has written /tmp/node-e2e-20220509T122354/static-pods382474590/mirror-pod-with-grace-period-6062-graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f.yaml
STEP: wait for the mirror pod to be running
[It] mirror pod termination should satisfy grace period when static pod is deleted [NodeConformance]
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_grace_period_test.go:54
STEP: get mirror pod uid
STEP: delete the static pod
May 9 12:32:02.364: INFO: deleting static pod manifest "/tmp/node-e2e-20220509T122354/static-pods382474590/mirror-pod-with-grace-period-6062-graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f.yaml"
[AfterEach] when create a mirror pod
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_grace_period_test.go:81
STEP: wait for the mirror pod to disappear
[AfterEach] [sig-node] MirrorPodWithGracePeriod
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "mirror-pod-with-grace-period-6062".
STEP: Found 5 events.
May 9 12:32:03.406: INFO: At 2022-05-09 12:31:58 +0000 UTC - event for graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8: {kubelet tmp-node-e2e-253c3c08-cos-89-16108-659-8} MissingClusterDNS: pod: "graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8_mirror-pod-with-grace-period-6062(034038461ed55ba5869fdaaff36e0752)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
May 9 12:32:03.406: INFO: At 2022-05-09 12:31:59 +0000 UTC - event for graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8: {kubelet tmp-node-e2e-253c3c08-cos-89-16108-659-8} Pulling: Pulling image "busybox:1.31.1"
May 9 12:32:03.406: INFO: At 2022-05-09 12:32:00 +0000 UTC - event for graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8: {kubelet tmp-node-e2e-253c3c08-cos-89-16108-659-8} Pulled: Successfully pulled image "busybox:1.31.1" in 1.15922092s
May 9 12:32:03.406: INFO: At 2022-05-09 12:32:01 +0000 UTC - event for graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8: {kubelet tmp-node-e2e-253c3c08-cos-89-16108-659-8} Created: Created container m-test
May 9 12:32:03.406: INFO: At 2022-05-09 12:32:01 +0000 UTC - event for graceful-pod-eaf1a6b6-47af-4d7f-861d-9028dfc2612f-tmp-node-e2e-253c3c08-cos-89-16108-659-8: {kubelet tmp-node-e2e-253c3c08-cos-89-16108-659-8} Started: Started container m-test
May 9 12:32:03.411: INFO: POD NODE PHASE GRACE CONDITIONS
May 9 12:32:03.411: INFO:
May 9 12:32:03.413: INFO: Logging node info for node tmp-node-e2e-253c3c08-cos-89-16108-659-8
May 9 12:32:03.416: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-253c3c08-cos-89-16108-659-8 6fb4dede-50cb-4541-97d1-9ce42d79fdea 2751 0 2022-05-09 12:26:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-253c3c08-cos-89-16108-659-8 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-09 12:31:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}}}]},
  Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},
  Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3859939328 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3597795328 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,
  Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-09 12:31:12 +0000 UTC,LastTransitionTime:2022-05-09 12:26:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-09 12:31:12 +0000 UTC,LastTransitionTime:2022-05-09 12:26:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-09 12:31:12 +0000 UTC,LastTransitionTime:2022-05-09 12:26:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-09 12:31:12 +0000 UTC,LastTransitionTime:2022-05-09 12:26:17 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},
  Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.38,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-253c3c08-cos-89-16108-659-8,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},
  NodeInfo:NodeSystemInfo{MachineID:525b3555b01a7e68e6e7d1c963229ff4,SystemUUID:525b3555-b01a-7e68-e6e7-d1c963229ff4,BootID:4d7a4c95-421b-4775-af1c-605a7de76f37,KernelVersion:5.4.188+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://20.10.3,KubeletVersion:v1.21.13-rc.0.16+990a7f2e092094,KubeProxyVersion:v1.21.13-rc.0.16+990a7f2e092094,OperatingSystem:linux,Architecture:amd64,},
  Images:[]ContainerImage{
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:660af738347dd94cdd8069647136c84f11d03fc6dde3af0e746b302d3dfd10ec k8s.gcr.io/e2e-test-images/volume/gluster:1.2],SizeBytes:340335659,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:263881150,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},
    ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:4d0c0cef373fba0752721552f8d7a478156c255c8dbf90522165784e790f1ab7 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.1],SizeBytes:96394902,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:ac7a746f351635663abb0c240c0af71b229d1e321e478664c7816de4f4176818 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.1],SizeBytes:96393102,},
    ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},
    ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},
    ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:06e2eb28e041f114941fba36b83f40c313f58a29d8b60777bde1fc4650e0b4f2 k8s.gcr.io/e2e-test-images/ipc-utils:1.2],SizeBytes:10039660,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},
    ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},
    ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},
    ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},
    ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},
    ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},},
  VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
May 9 12:32:03.416: INFO: Logging kubelet events for node tmp-node-e2e-253c3c08-cos-89-16108-659-8
May 9 12:32:03.419: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-253c3c08-cos-89-16108-659-8
May 9 12:32:03.425: INFO: pod-with-prestop-http-hook started at 2022-05-09 12:31:53 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container pod-with-prestop-http-hook ready: true, restart count 0
May 9 12:32:03.425: INFO: privileged-pod started at 2022-05-09 12:31:26 +0000 UTC (0+2 container statuses recorded)
May 9 12:32:03.425: INFO: Container not-privileged-container ready: true, restart count 0
May 9 12:32:03.425: INFO: Container privileged-container ready: true, restart count 0
May 9 12:32:03.425: INFO: busybox-readonly-false-3486c523-fefb-44b2-8b0b-c9f8f49aa85e started at 2022-05-09 12:32:00 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container busybox-readonly-false-3486c523-fefb-44b2-8b0b-c9f8f49aa85e ready: false, restart count 0
May 9 12:32:03.425: INFO: test-webserver-40197c62-4780-4455-b487-1a598a22c9cd started at 2022-05-09 12:32:01 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container test-webserver ready: false, restart count 0
May 9 12:32:03.425: INFO: busybox-privileged-false-8d384908-0670-4f0d-93ca-2007ac9bd781 started at 2022-05-09 12:31:54 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container busybox-privileged-false-8d384908-0670-4f0d-93ca-2007ac9bd781 ready: false, restart count 0
May 9 12:32:03.425: INFO: pod-projected-secrets-32184833-3795-4a30-b41e-6ba439bc045c started at 2022-05-09 12:31:52 +0000 UTC (0+3 container statuses recorded)
May 9 12:32:03.425: INFO: Container creates-volume-test ready: true, restart count 0
May 9 12:32:03.425: INFO: Container dels-volume-test ready: true, restart count 0
May 9 12:32:03.425: INFO: Container upds-volume-test ready: true, restart count 0
May 9 12:32:03.425: INFO: image-pull-teste6e733fb-3e30-4dcd-a405-f0f624976b79 started at 2022-05-09 12:32:01 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container image-pull-test ready: false, restart count 0
May 9 12:32:03.425: INFO: test-webserver-8bac40ec-edfe-4e0a-a2ae-44158c5fd7fa started at 2022-05-09 12:28:17 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container test-webserver ready: true, restart count 0
May 9 12:32:03.425: INFO: busybox-readonly-fs4ca0d520-c097-48fe-94cb-bbc583cc7918 started at 2022-05-09 12:31:28 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.425: INFO: Container busybox-readonly-fs4ca0d520-c097-48fe-94cb-bbc583cc7918 ready: true, restart count 0
May 9 12:32:03.425: INFO: busybox-04685439-bd9a-4105-bd9b-454c1c7eebf7 started at 2022-05-09 12:28:59 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.426: INFO: Container busybox ready: true, restart count 0
May 9 12:32:03.426: INFO: pod-handle-http-request started at 2022-05-09 12:31:45 +0000 UTC (0+1 container statuses recorded)
May 9 12:32:03.426: INFO: Container agnhost-container ready: true, restart count 0
W0509 12:32:03.428324 983 metrics_grabber.go:102] Can't find any pods in namespace kube-system to grab metrics from
May 9 12:32:03.918: INFO: Latency metrics for node tmp-node-e2e-253c3c08-cos-89-16108-659-8
May 9 12:32:03.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "mirror-pod-with-grace-period-6062" for this suite.
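The 404 above comes from the Get at mirror_pod_grace_period_test.go:72: the mirror pod had already been deleted when the test polled for it. As a hedged reading of what that line asserts, after the static-pod manifest is removed the mirror pod should remain visible on the API server until its termination grace period elapses, so a NotFound inside that window fails the test. Below is a minimal client-go sketch of that pattern, not the actual code from mirror_pod_grace_period_test.go; the function name and its parameters are hypothetical.

// Hedged sketch, not the test's real code: it only illustrates the assertion
// pattern implied by the NotFound error above. All names are illustrative.
package e2esketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForMirrorPodDuringGracePeriod polls the mirror pod while the static
// pod's termination grace period is still running. If the pod vanishes from
// the API server before the deadline, it returns the NotFound error -- the
// same condition the failing test reports.
func waitForMirrorPodDuringGracePeriod(ctx context.Context, cs kubernetes.Interface, ns, name string, grace time.Duration) error {
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// Mirror pod gone too early: the grace period was not satisfied.
			return fmt.Errorf("mirror pod %s/%s disappeared before its grace period elapsed: %w", ns, name, err)
		}
		if err != nil {
			return err // surface transient API errors
		}
		time.Sleep(time.Second)
	}
	return nil // pod stayed visible for the whole grace period
}

Under that reading, the failure suggests the kubelet on this image (cos-stable1, per "from junit_cos-stable1_03.xml") tore the mirror pod down before the grace period was over, rather than the test mis-polling.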
error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-100 --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/dockershim/image-config.yaml: exit status 1
from junit_runner.xml
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [sig-node] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [sig-node] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [sig-node] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [sig-node] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [sig-node] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [sig-node] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [sig-node] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [sig-node] Kubelet Volume Manager Volume Manager On termination of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [sig-node] Kubelet Volume Manager Volume Manager On termination of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [sig-node] Kubelet Volume Manager Volume Manager On termination of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [sig-node] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [sig-node] MirrorPodWithGracePeriod when create a mirror pod mirror pod termination should satisfy grace period when static pod is deleted [NodeConformance]
E2eNode Suite [sig-node] MirrorPodWithGracePeriod when create a mirror pod mirror pod termination should satisfy grace period when static pod is deleted [NodeConformance]
E2eNode Suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
E2eNode Suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [sig-node] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [sig-node] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest GetDeployer
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest test setup
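The kubetest entries above are runner lifecycle phases, not Ginkgo specs; every other entry in this listing is an individual E2eNode Suite spec from the junit results. For reference, a single listed spec can usually be re-run from a k8s.io/kubernetes checkout with the node-e2e make target and a Ginkgo focus regex; the spec chosen below is only an example picked from this listing, and the invocation assumes a standard local checkout rather than anything recorded in this run:

make test-e2e-node FOCUS="ConfigMap should update ConfigMap successfully"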
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
E2eNode Suite [sig-node] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [sig-node] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [sig-node] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
E2eNode Suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
E2eNode Suite [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
E2eNode Suite [sig-node] ConfigMap should update ConfigMap successfully
E2eNode Suite [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [sig-node] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [sig-node] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [sig-node] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 90 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 90 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 90 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 90 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 90 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 90 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [sig-node] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [sig-node] Device Manager [Serial] [Feature:DeviceManager][NodeFeature:DeviceManager] With SRIOV devices in the system should be able to recover V1 (aka pre-1.20) checkpoint data and reject pods before device re-registration
E2eNode Suite [sig-node] Device Manager [Serial] [Feature:DeviceManager][NodeFeature:DeviceManager] With SRIOV devices in the system should be able to recover V1 (aka pre-1.20) checkpoint data and update topology info on device re-registration
E2eNode Suite [sig-node] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [sig-node] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
E2eNode Suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
E2eNode Suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
E2eNode Suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
E2eNode Suite [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [sig-node] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [sig-node] GracefulNodeShutdown [Serial] [NodeAlphaFeature:GracefulNodeShutdown] when gracefully shutting down should be able to gracefully shutdown pods with various grace periods
E2eNode Suite [sig-node] GracefulNodeShutdown [Serial] [NodeAlphaFeature:GracefulNodeShutdown] when gracefully shutting down should be able to handle a cancelled shutdown
E2eNode Suite [sig-node] Hostname of Pod [Feature:SetHostnameAsFQDN][NodeFeature:SetHostnameAsFQDN] a pod configured to set FQDN as hostname will remain in Pending state generating FailedCreatePodSandBox events when the FQDN is longer than 64 bytes
E2eNode Suite [sig-node] Hostname of Pod [Feature:SetHostnameAsFQDN][NodeFeature:SetHostnameAsFQDN] a pod with subdomain field has FQDN, hostname is shortname
E2eNode Suite [sig-node] Hostname of Pod [Feature:SetHostnameAsFQDN][NodeFeature:SetHostnameAsFQDN] a pod with subdomain field has FQDN, when setHostnameAsFQDN is set to true, the FQDN is set as hostname
E2eNode Suite [sig-node] Hostname of Pod [Feature:SetHostnameAsFQDN][NodeFeature:SetHostnameAsFQDN] a pod without FQDN is not affected by SetHostnameAsFQDN field
E2eNode Suite [sig-node] Hostname of Pod [Feature:SetHostnameAsFQDN][NodeFeature:SetHostnameAsFQDN] a pod without subdomain field does not have FQDN
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should add resources for new huge page sizes on kubelet restart
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should add resources for new huge page sizes on kubelet restart
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should add resources for new huge page sizes on kubelet restart
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should remove resources for huge page sizes no longer supported
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should remove resources for huge page sizes no longer supported
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] should remove resources for huge page sizes no longer supported
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain multiple hugepages resources should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain multiple hugepages resources should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain multiple hugepages resources should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the backward compatible API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the backward compatible API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the backward compatible API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the new API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the new API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeSpecialFeature:HugePages] when start the pod with the resources requests that contain only one hugepages resource with the new API should set correct hugetlb mount and limit under the container cgroup
E2eNode Suite [sig-node] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [sig-node] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [sig-node] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [sig-node] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] Kubelet PodOverhead handling [LinuxOnly] PodOverhead cgroup accounting On running pod with PodOverhead defined Pod cgroup should be sum of overhead and resource limits
E2eNode Suite [sig-node] Kubelet PodOverhead handling [LinuxOnly] PodOverhead cgroup accounting On running pod with PodOverhead defined Pod cgroup should be sum of overhead and resource limits
E2eNode Suite [sig-node] Kubelet PodOverhead handling [LinuxOnly] PodOverhead cgroup accounting On running pod with PodOverhead defined Pod cgroup should be sum of overhead and resource limits
E2eNode Suite [sig-node] Lease lease API should be available [Conformance]
E2eNode Suite [sig-node] Lease lease API should be available [Conformance]
E2eNode Suite [sig-node] Lease lease API should be available [Conformance]
E2eNode Suite [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with none policy should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with none policy should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with none policy should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod has init and app containers should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod has init and app containers should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod has init and app containers should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod has only app containers should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod has only app containers should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod has only app containers should succeed to start the pod
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod memory request is bigger than free memory on each NUMA node should be rejected
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod memory request is bigger than free memory on each NUMA node should be rejected
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when guaranteed pod memory request is bigger than free memory on each NUMA node should be rejected
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when multiple guaranteed pods started should succeed to start all pods
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when multiple guaranteed pods started should succeed to start all pods
E2eNode Suite [sig-node] Memory Manager [Serial] [Feature:MemoryManager][NodeAlphaFeature:MemoryManager] with static policy when multiple guaranteed pods started should succeed to start all pods
E2eNode Suite [sig-node] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
E2eNode Suite [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
E2eNode Suite [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
E2eNode Suite [sig-node] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [sig-node] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [sig-node] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
E2eNode Suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
E2eNode Suite [sig-node] NodeProblemDetector [NodeFeature:NodeProblemDetector] [Serial] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [sig-node] NodeProblemDetector [NodeFeature:NodeProblemDetector] [Serial] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [sig-node] NodeProblemDetector [NodeFeature:NodeProblemDetector] [Serial] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] With SRIOV devices in the system should return the expected responses with cpumanager none policy
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] With SRIOV devices in the system should return the expected responses with cpumanager none policy
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] With SRIOV devices in the system should return the expected responses with cpumanager none policy
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] With SRIOV devices in the system should return the expected responses with cpumanager static policy enabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] With SRIOV devices in the system should return the expected responses with cpumanager static policy enabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] With SRIOV devices in the system should return the expected responses with cpumanager static policy enabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected error with the feature gate disabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected error with the feature gate disabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected error with the feature gate disabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected responses with cpumanager none policy
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected responses with cpumanager none policy
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected responses with cpumanager none policy
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected responses with cpumanager static policy enabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected responses with cpumanager static policy enabled
E2eNode Suite [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system should return the expected responses with cpumanager static policy enabled
E2eNode Suite [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod
E2eNode Suite [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod
E2eNode Suite [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod
E2eNode Suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
E2eNode Suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
E2eNode Suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
E2eNode Suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
E2eNode Suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
E2eNode Suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
E2eNode Suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [sig-node] Pods should delete a collection of pods [Conformance]
E2eNode Suite [sig-node] Pods should delete a collection of pods [Conformance]
E2eNode Suite [sig-node] Pods should delete a collection of pods [Conformance]
E2eNode Suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
E2eNode Suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
E2eNode Suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
E2eNode Suite [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
E2eNode Suite [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
E2eNode Suite [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
E2eNode Suite [sig-node] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
E2eNode Suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
E2eNode Suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
E2eNode Suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
E2eNode Suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
E2eNode Suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
E2eNode Suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
E2eNode Suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
E2eNode Suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
E2eNode Suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
E2eNode Suite [sig-node] Probing container should be restarted startup probe fails
E2eNode Suite [sig-node] Probing container should be restarted startup probe fails
E2eNode Suite [sig-node] Probing container should be restarted startup probe fails
E2eNode Suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
E2eNode Suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
E2eNode Suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
E2eNode Suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 90 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 90 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 90 pods per node [Benchmark]
E2eNode Suite [sig-node] ResourceMetricsAPI [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api
E2eNode Suite [sig-node] ResourceMetricsAPI [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api
E2eNode Suite [sig-node] ResourceMetricsAPI [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api
E2eNode Suite [sig-node] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
E2eNode Suite [sig-node] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
E2eNode Suite [sig-node] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
E2eNode Suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
E2eNode Suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
E2eNode Suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [sig-node] Secrets should patch a secret [Conformance]
E2eNode Suite [sig-node] Secrets should patch a secret [Conformance]
E2eNode Suite [sig-node] Secrets should patch a secret [Conformance]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [sig-node] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [sig-node] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [sig-node] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [sig-node] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
E2eNode Suite [sig-node] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
E2eNode Suite [sig-node] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
E2eNode Suite [sig-node] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [sig-node] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [sig-node] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager node alignment test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager node alignment test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager node alignment test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run the Topology Manager pod scope alignment test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run the Topology Manager pod scope alignment test suite
E2eNode Suite [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run the Topology Manager pod scope alignment test suite
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
E2eNode Suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
E2eNode Suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
E2eNode Suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
E2eNode Suite [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
E2eNode Suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
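The downwardAPI volume variant publishes pod metadata as files rather than env vars; a sketch of the fsgroup case (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-fsgroup-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400         # the "fsgroup and defaultMode" variant
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name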
E2eNode Suite [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
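A memory-backed emptyDir is a tmpfs mount; the "specified size" part comes from sizeLimit. Sketch (names illustrative; exact sizing behavior depends on the kubelet's feature gates in this release):

apiVersion: v1
kind: Pod
metadata:
  name: memory-emptydir-demo   # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "df -h /mnt/tmpfs"]
    volumeMounts:
    - name: tmpfs-volume
      mountPath: /mnt/tmpfs
  volumes:
  - name: tmpfs-volume
    emptyDir:
      medium: Memory     # backs the volume with tmpfs
      sizeLimit: 128Mi   # enforced via eviction, or used to size the tmpfs mount
                         # itself when the SizeMemoryBackedVolumes gate is enabled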
E2eNode Suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
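The shared-volume conformance test amounts to two containers mounting the same emptyDir; a sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir-demo   # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /shared/data && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 5 && cat /shared/data && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}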
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
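The six FSGroup emptyDir cases reduce to variations of one pod shape: an fsGroup in the pod security context plus an emptyDir on either the default medium or tmpfs. Sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-fsgroup-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000    # the non-root cases; omit so the container runs as root
    fsGroup: 123
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ldn /mnt/volume && touch /mnt/volume/f && ls -ln /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # the tmpfs cases; omit medium for the default-medium cases

The volume root is group-owned by the fsGroup gid with the setgid bit set, so files the container creates inherit that group.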
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
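Projected volumes merge several sources into one mount; the configMap and downwardAPI variants above differ from their standalone counterparts only in this wrapping. A sketch combining both sources, with a key mapping (all names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-demo   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -lnR /projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      defaultMode: 0440
      sources:
      - configMap:
          name: demo-config        # hypothetical; must exist unless marked optional
          items:
          - key: data-1            # the "mappings" variant: rename a key
            path: renamed/data-1
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name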
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
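The "non-optional" failure tests (Secrets here, Projected secret above, and the configMap counterparts) all hinge on optional: false, which is the default: if the referenced object or key is missing, the volume mount fails and the containers never start. Sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: non-optional-secret-demo   # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
  volumes:
  - name: secret-volume
    secret:
      secretName: missing-secret   # hypothetical; does not exist, so the mount fails
      optional: false              # the default; the pod stays in ContainerCreating
                                   # with FailedMount events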
E2eNode Suite [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
E2eNode Suite [sig-storage] Volumes GlusterFS should be mountable
E2eNode Suite [sig-storage] Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [sig-storage] Volumes NFSv4 should be mountable for NFSv4
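The NFS tests mount an in-tree nfs volume against a test server the suite runs itself; which protocol version is used depends on the server and mount configuration. A client-side sketch (server address and export path are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-demo   # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls /mnt/nfs"]
    volumeMounts:
    - name: nfs-volume
      mountPath: /mnt/nfs
  volumes:
  - name: nfs-volume
    nfs:
      server: 10.0.0.5   # hypothetical NFS server address
      path: /exports     # hypothetical export path
      readOnly: true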