PR | jorge-gasca: Recreate target group when externalTrafficPolicy changes to local
Result | SUCCESS
Tests | 1 failed / 473 succeeded
Started |
Elapsed | 18m18s
Revision |
Builder | gke-prow-ssd-pool-1a225945-f0q6
Refs | master:d87c921a 84678:3d2b83d3
infra-commit | 4ab1254b1 |
job-version | v1.18.0-alpha.0.1114+e3c0e7deb5659a |
pod | e39b08a4-0cea-11ea-bb11-0a04a03f6314 |
repo | k8s.io/kubernetes |
repo-commit | e3c0e7deb5659a739231e694088a2c8ab1b32cec |
repos | {u'k8s.io/kubernetes': u'master:d87c921a516a5b0387269910cef8551fae62de7f,84678:3d2b83d331a58ba022b39e3f43b10719f2844923'} |
revision | v1.18.0-alpha.0.1114+e3c0e7deb5659a |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-storage\]\sHostPath\sshould\ssupport\sr\/w\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65 Unexpected error: <*errors.errorString | 0xc001116030>: { s: "expected pod \"pod-host-path-test\" success: pod \"pod-host-path-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.41 PodIP:10.100.0.106 PodIPs:[{IP:10.100.0.106}] StartTime:2019-11-22 05:54:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2019-11-22 05:54:57 +0000 UTC,FinishedAt:2019-11-22 05:54:57 +0000 UTC,ContainerID:docker://c058683eb8347416f4231ead1d144c4cc870c58a2307b028c3b149c5e300ff66,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://c058683eb8347416f4231ead1d144c4cc870c58a2307b028c3b149c5e300ff66 Started:0xc0008ddc19} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2019-11-22 05:54:57 +0000 UTC,FinishedAt:2019-11-22 05:57:58 +0000 UTC,ContainerID:docker://6055db4a8b3a9e6f9415741b7ba1af41a332388b91cbe11c3428560020820b18,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://6055db4a8b3a9e6f9415741b7ba1af41a332388b91cbe11c3428560020820b18 Started:0xc0008ddc1f}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } expected pod "pod-host-path-test" success: pod "pod-host-path-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [test-container-1 test-container-2]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-22 05:54:57 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.41 
PodIP:10.100.0.106 PodIPs:[{IP:10.100.0.106}] StartTime:2019-11-22 05:54:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:test-container-1 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2019-11-22 05:54:57 +0000 UTC,FinishedAt:2019-11-22 05:54:57 +0000 UTC,ContainerID:docker://c058683eb8347416f4231ead1d144c4cc870c58a2307b028c3b149c5e300ff66,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://c058683eb8347416f4231ead1d144c4cc870c58a2307b028c3b149c5e300ff66 Started:0xc0008ddc19} {Name:test-container-2 State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2019-11-22 05:54:57 +0000 UTC,FinishedAt:2019-11-22 05:57:58 +0000 UTC,ContainerID:docker://6055db4a8b3a9e6f9415741b7ba1af41a332388b91cbe11c3428560020820b18,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/kubernetes-e2e-test-images/mounttest:1.0 ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 ContainerID:docker://6055db4a8b3a9e6f9415741b7ba1af41a332388b91cbe11c3428560020820b18 Started:0xc0008ddc1f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:894from junit_cos-stable1_06.xml
[BeforeEach] [sig-storage] HostPath /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename hostpath Nov 22 05:54:57.058: INFO: Skipping waiting for service account [BeforeEach] [sig-storage] HostPath /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should support r/w [NodeConformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65 �[1mSTEP�[0m: Creating a pod to test hostPath r/w Nov 22 05:54:57.068: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-69" to be "success or failure" Nov 22 05:54:57.071: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712042ms Nov 22 05:54:59.076: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008345493s Nov 22 05:55:01.089: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02105204s Nov 22 05:55:03.092: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023600654s Nov 22 05:55:05.100: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.032315769s Nov 22 05:55:07.156: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 10.087759423s Nov 22 05:55:09.158: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.08995059s Nov 22 05:55:11.161: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 14.093211754s Nov 22 05:55:13.182: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 16.113984567s Nov 22 05:55:15.185: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 18.116587982s Nov 22 05:55:17.186: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 20.118057243s Nov 22 05:55:19.188: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 22.119729032s Nov 22 05:55:21.224: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 24.156405217s Nov 22 05:55:23.227: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 26.158660359s Nov 22 05:55:25.229: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 28.160432223s Nov 22 05:55:27.231: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 30.162775544s Nov 22 05:55:29.233: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 32.165162994s Nov 22 05:55:31.236: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 34.16755639s Nov 22 05:55:33.239: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 36.170495786s Nov 22 05:55:35.241: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 38.172990172s Nov 22 05:55:37.244: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 40.175928481s Nov 22 05:55:39.246: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 42.178154953s Nov 22 05:55:41.248: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.180108625s Nov 22 05:55:43.250: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 46.182178924s Nov 22 05:55:45.278: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 48.210305338s Nov 22 05:55:47.281: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 50.212699292s Nov 22 05:55:49.283: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 52.215122464s Nov 22 05:55:51.285: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 54.21718128s Nov 22 05:55:53.291: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 56.222456788s Nov 22 05:55:55.300: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 58.232037716s Nov 22 05:55:57.302: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.234368036s Nov 22 05:55:59.308: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.239503916s Nov 22 05:56:01.311: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.24243469s Nov 22 05:56:03.313: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.244548062s Nov 22 05:56:05.315: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.246759449s Nov 22 05:56:07.320: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.251550099s Nov 22 05:56:09.322: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.253759601s Nov 22 05:56:11.328: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.259924702s Nov 22 05:56:13.330: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.261891418s Nov 22 05:56:15.332: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.263654247s Nov 22 05:56:17.334: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.26545957s Nov 22 05:56:19.336: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.267603988s Nov 22 05:56:21.355: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.287164658s Nov 22 05:56:23.357: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.289315038s Nov 22 05:56:25.372: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.304344904s Nov 22 05:56:27.403: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.335125112s Nov 22 05:56:29.406: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.337573354s Nov 22 05:56:31.408: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.340149019s Nov 22 05:56:33.422: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.353774766s Nov 22 05:56:35.424: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.355903351s Nov 22 05:56:37.426: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.358016442s Nov 22 05:56:39.428: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m42.360266978s Nov 22 05:56:41.431: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.36253303s Nov 22 05:56:43.433: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.364679122s Nov 22 05:56:45.437: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.369398393s Nov 22 05:56:47.439: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.371397096s Nov 22 05:56:49.442: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.373476874s Nov 22 05:56:51.443: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.375284419s Nov 22 05:56:53.446: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.377626214s Nov 22 05:56:55.448: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.379766342s Nov 22 05:56:57.453: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.384872753s Nov 22 05:56:59.455: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.386919115s Nov 22 05:57:01.460: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.392167742s Nov 22 05:57:03.462: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.39442453s Nov 22 05:57:05.465: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.39646496s Nov 22 05:57:07.467: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.398533075s Nov 22 05:57:09.477: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.409406613s Nov 22 05:57:11.480: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.411480965s Nov 22 05:57:13.481: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.4133016s Nov 22 05:57:15.483: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m18.415308323s Nov 22 05:57:17.485: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.417305191s Nov 22 05:57:19.487: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.419199467s Nov 22 05:57:21.489: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.421303069s Nov 22 05:57:23.491: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m26.423315043s Nov 22 05:57:25.495: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.426969242s Nov 22 05:57:27.498: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.429749748s Nov 22 05:57:29.527: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.45864874s Nov 22 05:57:31.531: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m34.462448636s Nov 22 05:57:33.532: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.464250434s Nov 22 05:57:35.534: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m38.466014981s Nov 22 05:57:37.536: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.468130572s Nov 22 05:57:39.538: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m42.470425272s Nov 22 05:57:41.541: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.473151578s Nov 22 05:57:43.544: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.475467222s Nov 22 05:57:45.546: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.47762917s Nov 22 05:57:47.548: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m50.479961072s Nov 22 05:57:49.550: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.48210204s Nov 22 05:57:51.552: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.484226238s Nov 22 05:57:53.554: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.486358592s Nov 22 05:57:55.556: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2m58.488171279s Nov 22 05:57:57.558: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.489999995s Nov 22 05:57:59.560: INFO: Pod "pod-host-path-test": Phase="Failed", Reason="", readiness=false. Elapsed: 3m2.492249878s Nov 22 05:57:59.571: INFO: Output of node "tmp-node-e2e-1112e945-cos-stable-63-10032-71-0" pod "pod-host-path-test" container "test-container-1": content of file "/test-volume/test-file": mount-tester new file mode of file "/test-volume/test-file": -rw-r--r-- Nov 22 05:57:59.580: INFO: Output of node "tmp-node-e2e-1112e945-cos-stable-63-10032-71-0" pod "pod-host-path-test" container "test-container-2": Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file 
/test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error 
reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, 
retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying Error reading file /test-volume/test-file: open /test-volume/test-file: no such file or directory, retrying �[1mSTEP�[0m: delete the pod Nov 22 05:57:59.593: INFO: Waiting for pod pod-host-path-test to disappear Nov 22 05:57:59.597: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 �[1mSTEP�[0m: Collecting events from namespace "hostpath-69". �[1mSTEP�[0m: Found 6 events. Nov 22 05:57:59.607: INFO: At 2019-11-22 05:54:57 +0000 UTC - event for pod-host-path-test: {kubelet tmp-node-e2e-1112e945-cos-stable-63-10032-71-0} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Nov 22 05:57:59.607: INFO: At 2019-11-22 05:54:57 +0000 UTC - event for pod-host-path-test: {kubelet tmp-node-e2e-1112e945-cos-stable-63-10032-71-0} Created: Created container test-container-1 Nov 22 05:57:59.607: INFO: At 2019-11-22 05:54:57 +0000 UTC - event for pod-host-path-test: {kubelet tmp-node-e2e-1112e945-cos-stable-63-10032-71-0} Started: Started container test-container-1 Nov 22 05:57:59.607: INFO: At 2019-11-22 05:54:57 +0000 UTC - event for pod-host-path-test: {kubelet tmp-node-e2e-1112e945-cos-stable-63-10032-71-0} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine Nov 22 05:57:59.607: INFO: At 2019-11-22 05:54:57 +0000 UTC - event for pod-host-path-test: {kubelet tmp-node-e2e-1112e945-cos-stable-63-10032-71-0} Created: Created container test-container-2 Nov 22 05:57:59.607: INFO: At 2019-11-22 05:54:58 +0000 UTC - event for pod-host-path-test: {kubelet tmp-node-e2e-1112e945-cos-stable-63-10032-71-0} Started: Started container test-container-2 Nov 22 05:57:59.613: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 05:57:59.613: INFO: Nov 22 05:57:59.617: INFO: Logging node info for node tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 Nov 22 05:57:59.621: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 /api/v1/nodes/tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 1b24daa4-279c-4733-bbdc-b6877619499d 812 0 2019-11-22 05:52:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 
DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885461504 0} {<nil>} 3794396Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623317504 0} {<nil>} 3538396Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-22 05:53:09 +0000 UTC,LastTransitionTime:2019-11-22 05:52:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-22 05:53:09 +0000 UTC,LastTransitionTime:2019-11-22 05:52:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-22 05:53:09 +0000 UTC,LastTransitionTime:2019-11-22 05:52:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-22 05:53:09 +0000 UTC,LastTransitionTime:2019-11-22 05:52:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.41,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-1112e945-cos-stable-63-10032-71-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7194bce6d724bf7f1793d19c818a406e,SystemUUID:7194BCE6-D724-BF7F-1793-D19C818A406E,BootID:b02b7a34-0b10-4795-b61d-ba1a7c684e5c,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.18.0-alpha.0.1114+e3c0e7deb5659a,KubeProxyVersion:v1.18.0-alpha.0.1114+e3c0e7deb5659a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[perl@sha256:4836ccdaa3d58be2f36c570f7473dfd55869db6162bfd4f30e4b4d62faaee6e1 perl:5.26],SizeBytes:852914604,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:634170972,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:332011484,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:98707739,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:96288249,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:96286449,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:10039224,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Nov 22 05:57:59.623: INFO: Logging kubelet events for node tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 Nov 22 05:57:59.627: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 Nov 22 05:57:59.631: INFO: stats-busybox-0 started at 2019-11-22 05:57:08 +0000 UTC (0+1 container statuses recorded) Nov 22 05:57:59.631: INFO: Container busybox-container ready: true, restart count 1 Nov 22 05:57:59.631: INFO: stats-busybox-1 started at 2019-11-22 05:57:08 +0000 UTC (0+1 container statuses recorded) Nov 22 05:57:59.631: INFO: Container busybox-container ready: true, restart count 1 Nov 22 05:57:59.631: 
INFO: busybox-881d7787-5dbe-4bf5-9d05-f128b6ede1ce started at 2019-11-22 05:56:29 +0000 UTC (0+1 container statuses recorded) Nov 22 05:57:59.631: INFO: Container busybox ready: true, restart count 0 Nov 22 05:57:59.631: INFO: image-pull-test2722b74e-27a6-4e28-8750-9f7fea6dc437 started at 2019-11-22 05:53:59 +0000 UTC (0+1 container statuses recorded) Nov 22 05:57:59.631: INFO: Container image-pull-test ready: false, restart count 0 Nov 22 05:57:59.631: INFO: liveness-9ad89f0a-eccb-426e-ad4f-4da4abb53dcb started at 2019-11-22 05:57:28 +0000 UTC (0+1 container statuses recorded) Nov 22 05:57:59.631: INFO: Container liveness ready: true, restart count 1 W1122 05:57:59.633192 1316 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 22 05:57:59.668: INFO: Latency metrics for node tmp-node-e2e-1112e945-cos-stable-63-10032-71-0 Nov 22 05:57:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "hostpath-69" for this suite.
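For context on the failure above: the test runs two containers against the same hostPath volume. test-container-1 writes /test-volume/test-file and exits 0 (its log shows the file content and mode), while test-container-2 keeps retrying to read the same file, never sees it ("no such file or directory"), and exits 1, which marks the pod Failed. The sketch below is a hand-written approximation of that pod shape, not the actual host_path.go test code; the function name, the host path, and the exact mounttest arguments are assumptions, while the container names, mount path, and image tag are taken from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathRWPod approximates the pod used by the HostPath r/w e2e case:
// one writer container and one reader container sharing a hostPath mount.
func hostPathRWPod() *corev1.Pod {
	hostPathType := corev1.HostPathDirectoryOrCreate
	mounts := []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{
						// Assumed path for illustration; the real test picks its own directory.
						Path: "/tmp/host-path-test",
						Type: &hostPathType,
					},
				},
			}},
			Containers: []corev1.Container{
				{
					// Writer: creates the file on the shared mount and exits 0.
					Name:         "test-container-1",
					Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
					Args:         []string{"--new_file_0644=/test-volume/test-file", "--file_mode=/test-volume/test-file"},
					VolumeMounts: mounts,
				},
				{
					// Reader: retries reading the same file; in the run above it never
					// became visible, so this container exited 1 and the pod failed.
					Name:         "test-container-2",
					Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
					Args:         []string{"--file_content_in_loop=/test-volume/test-file", "--retry_time=180"},
					VolumeMounts: mounts,
				},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", hostPathRWPod())
}
```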
Deferred TearDown
DumpClusterLogs
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
E2eNode Suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
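The [sig-storage] volume entries above (HostPath, EmptyDir, ConfigMap, Projected, Secrets, Downward API) share one basic shape: start a short-lived pod that mounts the volume under test, have a busybox-style container write through or read from the mount, and treat a non-zero exit (and hence a Failed pod phase) as a test failure. Purely as a minimal, hedged sketch of that shape (not the suite's actual helper code; the pod name, image, and paths here are hypothetical), a pod for a HostPath "should support r/w"-style check could be built roughly like this:

// Illustrative sketch only; not the e2e suite's own test code.
package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathRWPod builds a pod that writes a file into a hostPath mount and
// reads it back; a non-zero exit from the shell would leave the pod Failed.
func hostPathRWPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostpath-rw-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "writer-reader",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello > /test-volume/f && cat /test-volume/f"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Hypothetical host directory used only for this sketch.
					HostPath: &v1.HostPathVolumeSource{Path: "/tmp/hostpath-rw-demo"},
				},
			}},
		},
	}
}

In the conformance suite such a pod would be created through the test framework's client, then its terminal phase and container output asserted on; the sketch above only shows how the object itself is put together.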
Node Tests
TearDown
TearDown Previous
Timeout
Up
test setup
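Each entry in these lists is a single Ginkgo spec name: the suite prefix, bracketed group tags such as [k8s.io] or [sig-storage], and the nested Describe/Context/It description strings are concatenated into the one-line identifiers shown, and markers like [NodeConformance], [Serial], [Slow], or [Feature:...] are ordinary text inside those strings that focus/skip regular-expression filters match against at run time. The following is only an invented skeleton illustrating that naming structure, not code taken from the repository:

// Illustrative sketch only; the description strings are examples.
package example

import "github.com/onsi/ginkgo"

// Nested description strings join into one spec name such as
// "[sig-storage] HostPath should support r/w [NodeConformance]".
var _ = ginkgo.Describe("[sig-storage] HostPath", func() {
	ginkgo.It("should support r/w [NodeConformance]", func() {
		// create the pod under test, wait for it to finish, assert on its output
	})
})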
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] Lease lease API should be available [Conformance]
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
E2eNode Suite [k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
E2eNode Suite [k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
E2eNode Suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [k8s.io] ResourceMetricsAPI when querying /resource/metrics should report resource usage through the v1alpha1 resouce metrics api
E2eNode Suite [k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should *not* be restarted by liveness probe because startup probe delays it [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should be restarted by liveness probe after startup probe enables it [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should be restarted startup probe fails [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeAlphaFeature:StartupProbe] when a container has a startup probe should not be ready until startupProbe succeeds [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
E2eNode Suite [k8s.io] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]
E2eNode Suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow]
E2eNode Suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow]
E2eNode Suite [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow]
E2eNode Suite [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow]
E2eNode Suite [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow]
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
E2eNode Suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
E2eNode Suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
E2eNode Suite [sig-node] ConfigMap should update ConfigMap successfully
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
E2eNode Suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass
E2eNode Suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
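The two ConfigMap [Slow] cases above cover non-optional ConfigMap volumes: when optional is false (the default), a missing ConfigMap or a missing key keeps the Pod from starting instead of yielding an empty volume. A minimal sketch with hypothetical names, assuming the referenced ConfigMap or key is absent:
  apiVersion: v1
  kind: Pod
  metadata:
    name: non-optional-configmap-demo      # hypothetical name
  spec:
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls /etc/cfg && sleep 3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      configMap:
        name: missing-configmap            # hypothetical; does not exist, so the Pod cannot start
        optional: false                    # non-optional: failure instead of an empty volume
        items:
        - key: missing-key                 # a key absent from the ConfigMap also blocks startup
          path: data.txt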
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
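The FSGroup cases above (ConfigMap and Downward API volumes) check file ownership and modes when the Pod runs as a non-root user with an fsGroup set. A minimal sketch with hypothetical names and arbitrary UID/GID values; the "defaultMode and fsGroup" variants additionally pin the volume's file mode:
  apiVersion: v1
  kind: Pod
  metadata:
    name: fsgroup-demo                     # hypothetical name
  spec:
    securityContext:
      runAsUser: 1000                      # non-root
      fsGroup: 2000                        # volume files should be group-owned by this GID
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -ln /etc/cfg /etc/podinfo && sleep 3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: cfg
      configMap:
        name: example-config               # hypothetical ConfigMap
        defaultMode: 0440                  # combined with fsGroup in the "defaultMode" variants
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name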
E2eNode Suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
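The EmptyDir cases above cover sharing an emptyDir between containers and FSGroup ownership/mode behaviour on the default medium and on tmpfs. A minimal sketch, with hypothetical names, of a Pod that combines those ideas (tmpfs-backed emptyDir, fsGroup set, volume shared by two containers):
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-fsgroup-demo            # hypothetical name
  spec:
    securityContext:
      fsGroup: 2000                        # new files in the volume should carry this group
    containers:
    - name: writer
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
      - name: shared
        mountPath: /data
    - name: reader
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 5; cat /data/msg; ls -ln /data && sleep 3600"]
      volumeMounts:
      - name: shared
        mountPath: /data
    volumes:
    - name: shared
      emptyDir:
        medium: Memory                     # tmpfs; omit for the default (node disk) medium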
E2eNode Suite [sig-storage] GCP Volumes GlusterFS should be mountable
E2eNode Suite [sig-storage] GCP Volumes GlusterFS should be mountable
E2eNode Suite [sig-storage] GCP Volumes GlusterFS should be mountable
E2eNode Suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
E2eNode Suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
E2eNode Suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
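The GCP Volumes cases above mount in-cluster GlusterFS and NFS servers stood up by the test. As a hedged illustration of the NFS client side only, with a hypothetical server address and export path; the Pod-side nfs volume source has no protocol-version field, so the NFSv3/NFSv4 split in the test names comes from how the server is set up rather than from the Pod spec:
  apiVersion: v1
  kind: Pod
  metadata:
    name: nfs-client-demo                  # hypothetical name
  spec:
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls /mnt/nfs && sleep 3600"]
      volumeMounts:
      - name: nfs
        mountPath: /mnt/nfs
    volumes:
    - name: nfs
      nfs:
        server: 10.0.0.10                  # hypothetical NFS server address
        path: /exports                     # hypothetical export path
        readOnly: true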
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
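The Projected configMap and Projected downwardAPI cases above repeat the same non-root/fsGroup checks as their non-projected counterparts, but deliver the data through a single projected volume. A minimal sketch with hypothetical names:
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo                   # hypothetical name
  spec:
    securityContext:
      runAsUser: 1000
      fsGroup: 2000
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -ln /etc/projected && sleep 3600"]
      volumeMounts:
      - name: all-in-one
        mountPath: /etc/projected
    volumes:
    - name: all-in-one
      projected:
        defaultMode: 0440
        sources:
        - configMap:
            name: example-config           # hypothetical ConfigMap
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name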
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
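The Projected secret and Secrets [Slow] cases above mirror the ConfigMap ones: a non-optional Secret volume whose Secret or key is missing keeps the Pod from starting. A minimal sketch with hypothetical names, assuming the referenced Secret is absent:
  apiVersion: v1
  kind: Pod
  metadata:
    name: non-optional-secret-demo         # hypothetical name
  spec:
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls /etc/secret && sleep 3600"]
      volumeMounts:
      - name: sec
        mountPath: /etc/secret
    volumes:
    - name: sec
      secret:
        secretName: missing-secret         # hypothetical; does not exist, so the Pod cannot start
        optional: false                    # non-optional: failure instead of an empty volume
        items:
        - key: missing-key                 # a key absent from the Secret also blocks startup
          path: token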