| Result   | FAILURE                  |
| Tests    | 3 failed / 877 succeeded |
| Started  |                          |
| Elapsed  | 36m35s                   |
| Revision | master                   |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sMount\spropagation\sshould\spropagate\smounts\swithin\sdefined\sscopes$'
Failing frames (from junit_01.xml):

k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).CreateSync(0xc0019757a0, 0x5?)
	test/e2e/framework/pod/pod_client.go:107 +0x94
k8s.io/kubernetes/test/e2e/node.glob..func7.1()
	test/e2e/node/mount_propagation.go:120 +0x2b9
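The failing frame, `PodClient.CreateSync`, creates a pod and then blocks until it reports Running with Ready=true, which is exactly the 2-second polling visible in the log below. A minimal client-go sketch of that create-and-wait pattern; this only mirrors the behavior observable in the log, it is not the framework's actual code:

```go
// Package podwait sketches the create-then-wait behavior seen in this log:
// create the pod, then poll every 2s for up to 5m until it is Running with
// Ready=true, and surface a timeout error otherwise.
package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func createSync(ctx context.Context, cs kubernetes.Interface, pod *v1.Pod) (*v1.Pod, error) {
	created, err := cs.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}
	start := time.Now()
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods(created.Namespace).Get(ctx, created.Name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// Matches the log lines: Pod "slave": Phase="Pending" ... Elapsed: ...
			fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", p.Name, p.Status.Phase, time.Since(start))
			if p.Status.Phase != v1.PodRunning {
				return false, nil // still Pending: keep polling
			}
			for _, c := range p.Status.Conditions {
				if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		return nil, fmt.Errorf("timed out while waiting for pod %s/%s to be running and ready: %w",
			created.Namespace, created.Name, err)
	}
	return created, nil
}
```

The real `CreateSync` differs in detail, but the observable behavior in this log (2s polls, a 5m budget, a `pod.timeoutError` on expiry) matches this shape.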
[BeforeEach] [sig-node] Mount propagation
  set up framework | framework.go:178
STEP: Creating a kubernetes client 01/12/23 02:56:26.186
Jan 12 02:56:26.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mount-propagation 01/12/23 02:56:26.187
STEP: Waiting for a default service account to be provisioned in namespace 01/12/23 02:56:26.49
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/12/23 02:56:26.683
[BeforeEach] [sig-node] Mount propagation
  test/e2e/framework/metrics/init/init.go:31
[It] should propagate mounts within defined scopes
  test/e2e/node/mount_propagation.go:86
Jan 12 02:56:27.076: INFO: Waiting up to 5m0s for pod "master" in namespace "mount-propagation-8493" to be "running and ready"
Jan 12 02:56:27.174: INFO: Pod "master": Phase="Pending", Reason="", readiness=false. Elapsed: 97.656052ms
Jan 12 02:56:27.174: INFO: The phase of Pod master is Pending, waiting for it to be Running (with Ready = true)
[... further identical polls elided: Pod "master" remained Pending, checked every ~2s from 02:56:29 through 02:57:41 (Elapsed 2s–1m14s) ...]
Jan 12 02:57:43.272: INFO: Pod "master": Phase="Running", Reason="", readiness=true. Elapsed: 1m16.195930775s
Jan 12 02:57:43.272: INFO: The phase of Pod master is Running (Ready = true)
Jan 12 02:57:43.272: INFO: Pod "master" satisfied condition "running and ready"
Jan 12 02:57:43.471: INFO: Waiting up to 5m0s for pod "slave" in namespace "mount-propagation-8493" to be "running and ready"
Jan 12 02:57:43.568: INFO: Pod "slave": Phase="Pending", Reason="", readiness=false. Elapsed: 97.471542ms
Jan 12 02:57:43.568: INFO: The phase of Pod slave is Pending, waiting for it to be Running (with Ready = true)
[... further identical polls elided: Pod "slave" remained Pending, checked every ~2s from 02:57:45 through 03:02:41 (Elapsed 2s–4m58s) ...]
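Before the timeout output below, it helps to see what the test was trying to create. A hedged reconstruction of the "slave" pod from the managed-fields in the failure dump: the volume path, mount path, container name, and privileged flag come from the dump, while the propagation mode, image tag, and command are assumptions (the authoritative spec is test/e2e/node/mount_propagation.go):

```go
// Hedged reconstruction of the "slave" pod from the managed-fields in the
// failure output below. Fields marked "assumption" are not in the dump.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	hostPathType := v1.HostPathUnset                  // dump shows Type: ""
	propagation := v1.MountPropagationHostToContainer // assumption: the mode's value is not in the dump
	privileged := true                                // f:securityContext/f:privileged is set in the dump

	slave := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "slave", Namespace: "mount-propagation-8493"},
		Spec: v1.PodSpec{
			NodeName: "i-02c6337a529a94d2d", // same node as "master", per the pod list below
			Volumes: []v1.Volume{{
				Name: "host",
				VolumeSource: v1.VolumeSource{HostPath: &v1.HostPathVolumeSource{
					Path: "/var/lib/kubelet/mount-propagation-8493",
					Type: &hostPathType,
				}},
			}},
			Containers: []v1.Container{{
				Name:            "cntr",
				Image:           "registry.k8s.io/e2e-test-images/busybox:1.29-4", // image named in the "master" events
				Command:         []string{"sh", "-c", "sleep 7200"},               // assumption
				SecurityContext: &v1.SecurityContext{Privileged: &privileged},
				VolumeMounts: []v1.VolumeMount{{
					Name:             "host",
					MountPath:        "/mnt/test",
					MountPropagation: &propagation,
				}},
			}},
		},
	}
	fmt.Println(slave.Name)
}
```

With host-to-container-style propagation, "slave" should observe mounts made under the shared host path once both pods run; here the pod never got that far, because its sandbox never received an IP.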
Jan 12 03:02:43.667: INFO: Pod "slave": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.196113042s
Jan 12 03:02:43.765: INFO: Pod "slave": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.293794746s
Jan 12 03:02:43.766: INFO: Unexpected error: <*pod.timeoutError | 0xc004304d50>:
    msg: "timed out while waiting for pod mount-propagation-8493/slave to be running and ready"
    observed object: Pod "slave" (namespace "mount-propagation-8493", UID 186e6e67-bcb9-44fd-ae24-dfea506daf65, ResourceVersion 49482)
    [full Pod object elided; the managed-fields show the spec of interest: hostPath volume "host" at /var/lib/kubelet/mount-propagation-8493, container "cntr" with a privileged securityContext and a volumeMount at /mnt/test with mountPropagation set, plus the auto-generated kube-api-access-ccb7l volume. Gomega truncated the original dump as it exceeds 'format.MaxLength'; see https://onsi.github.io/gomega/#adjusting-output]
Jan 12 03:02:43.766: FAIL: timed out while waiting for pod mount-propagation-8493/slave to be running and ready

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).CreateSync(0xc0019757a0, 0x5?)
	test/e2e/framework/pod/pod_client.go:107 +0x94
k8s.io/kubernetes/test/e2e/node.glob..func7.1()
	test/e2e/node/mount_propagation.go:120 +0x2b9

Jan 12 03:02:43.868: INFO: Waiting up to 5m0s for pod "hostexec-i-02c6337a529a94d2d-ztdnd" in namespace "mount-propagation-8493" to be "running"
Jan 12 03:02:43.966: INFO: Pod "hostexec-i-02c6337a529a94d2d-ztdnd": Phase="Pending", Reason="", readiness=false. Elapsed: 97.858453ms
Jan 12 03:02:46.065: INFO: Pod "hostexec-i-02c6337a529a94d2d-ztdnd": Phase="Running", Reason="", readiness=true. Elapsed: 2.196948196s
Jan 12 03:02:46.065: INFO: Pod "hostexec-i-02c6337a529a94d2d-ztdnd" satisfied condition "running"
Jan 12 03:02:46.065: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-8493"] Namespace:mount-propagation-8493 PodName:hostexec-i-02c6337a529a94d2d-ztdnd ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jan 12 03:02:46.065: INFO: >>> kubeConfig: /root/.kube/config
Jan 12 03:02:46.066: INFO: ExecWithOptions: Clientset creation
Jan 12 03:02:46.066: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/mount-propagation-8493/pods/hostexec-i-02c6337a529a94d2d-ztdnd/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-rf+%22%2Fvar%2Flib%2Fkubelet%2Fmount-propagation-8493%22&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Deleting pod hostexec-i-02c6337a529a94d2d-ztdnd in namespace mount-propagation-8493 01/12/23 03:02:46.747
[AfterEach] [sig-node] Mount propagation
  test/e2e/framework/node/init/init.go:32
Jan 12 03:02:46.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Mount propagation
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Mount propagation
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 03:02:46.954
STEP: Collecting events from namespace "mount-propagation-8493". 01/12/23 03:02:46.955
STEP: Found 23 events. 01/12/23 03:02:47.053

Every FailedCreatePodSandBox event below carries the same message (only the sandbox ID differs):
    Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "…": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available

Jan 12 03:02:47.053: INFO: FailedCreatePodSandBox for master: {kubelet i-02c6337a529a94d2d} at 02:56:27, 02:56:41, 02:56:56, 02:57:09, 02:57:24 (message above)
Jan 12 03:02:47.053: INFO: At 2023-01-12 02:57:40 +0000 UTC - events for master: {kubelet i-02c6337a529a94d2d} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine; Created: Created container cntr; Started: Started container cntr
Jan 12 03:02:47.053: INFO: FailedCreatePodSandBox for slave: {kubelet i-02c6337a529a94d2d} at 02:57:43, 02:57:55, 02:58:08, 02:58:21, 02:58:32, 02:58:45, 02:59:00, 02:59:12, 02:59:27, and 02:59:39 (combined from similar events) (message above)
Jan 12 03:02:47.053: INFO: At 2023-01-12 03:02:43 +0000 UTC - event for hostexec-i-02c6337a529a94d2d-ztdnd: {default-scheduler} Scheduled: Successfully assigned mount-propagation-8493/hostexec-i-02c6337a529a94d2d-ztdnd to i-02c6337a529a94d2d
Jan 12 03:02:47.053: INFO: At 2023-01-12 03:02:44 +0000 UTC - events for hostexec-i-02c6337a529a94d2d-ztdnd: {kubelet i-02c6337a529a94d2d} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine; Created and Started container agnhost-container
Jan 12 03:02:47.053: INFO: At 2023-01-12 03:02:46 +0000 UTC - event for hostexec-i-02c6337a529a94d2d-ztdnd: {kubelet i-02c6337a529a94d2d} Killing: Stopping container agnhost-container
Jan 12 03:02:47.151: INFO: POD     NODE                 PHASE    GRACE  CONDITIONS
Jan 12 03:02:47.151: INFO: master  i-02c6337a529a94d2d  Running         [Initialized=True 02:56:27; Ready=True 02:57:41; ContainersReady=True 02:57:41; PodScheduled=True 02:56:27]
03:02:47.151: INFO: slave i-02c6337a529a94d2d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:43 +0000 UTC ContainersNotReady containers with unready status: [cntr]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:43 +0000 UTC ContainersNotReady containers with unready status: [cntr]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:43 +0000 UTC }] Jan 12 03:02:47.151: INFO: Jan 12 03:02:47.351: INFO: Unable to fetch mount-propagation-8493/slave/cntr logs: the server rejected our request for an unknown reason (get pods slave) Jan 12 03:02:47.451: INFO: Logging node info for node i-027f21d552ba8db57 Jan 12 03:02:47.549: INFO: Node Info: &Node{ObjectMeta:{i-027f21d552ba8db57 afc72005-0dff-4853-9ebc-4181b449b438 50565 0 2023-01-12 02:34:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:i-027f21d552ba8db57 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-027f21d552ba8db57 topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.47.82 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-027f21d552ba8db57"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e.test Update v1 2023-01-12 02:56:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-12 03:02:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-027f21d552ba8db57,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.82,},NodeAddress{Type:ExternalIP,Address:18.170.218.178,},NodeAddress{Type:InternalDNS,Address:i-027f21d552ba8db57.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-027f21d552ba8db57.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-170-218-178.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28b8d29728f0c796d7d7c40a269a36,SystemUUID:ec28b8d2-9728-f0c7-96d7-d7c40a269a36,BootID:98da75b1-fa84-4aa0-b022-7917ae65922c,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:02:47.549: INFO: Logging kubelet events for node i-027f21d552ba8db57 Jan 12 03:02:47.649: INFO: Logging pods the kubelet thinks is on node i-027f21d552ba8db57 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-zncrz started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-bgpgg started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-hr497 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.756: INFO: ebs-csi-node-tkggt started at 2023-01-12 02:34:46 +0000 UTC (0+3 container statuses recorded) Jan 12 03:02:47.756: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:02:47.756: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:02:47.756: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-wdxzv started at 2023-01-12 02:56:00 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.756: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-btsbw started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.756: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.757: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.757: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.757: INFO: cilium-9zwcj started at 2023-01-12 02:34:46 +0000 UTC (1+1 container statuses recorded) Jan 12 03:02:47.757: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:02:47.757: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:02:47.757: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-v7pmj started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.757: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:47.757: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nf9x6 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:47.757: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.204: INFO: Latency metrics for node i-027f21d552ba8db57 Jan 12 03:02:48.204: INFO: Logging node info for node i-02c6337a529a94d2d Jan 12 03:02:48.302: INFO: Node Info: &Node{ObjectMeta:{i-02c6337a529a94d2d 09fd98e2-daf3-4206-adbe-9bde1cb707ea 50574 0 2023-01-12 02:34:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02c6337a529a94d2d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-02c6337a529a94d2d topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.254 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02c6337a529a94d2d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{aws-cloud-controller-manager Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e.test Update v1 2023-01-12 02:56:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-12 03:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-02c6337a529a94d2d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.254,},NodeAddress{Type:ExternalIP,Address:13.40.115.70,},NodeAddress{Type:InternalDNS,Address:i-02c6337a529a94d2d.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-02c6337a529a94d2d.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-40-115-70.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26b04c9710d692801011afeecf33f6,SystemUUID:ec26b04c-9710-d692-8010-11afeecf33f6,BootID:109b6331-8cb7-41df-87ac-5d30a4302c3f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:02:48.303: INFO: Logging kubelet events for node i-02c6337a529a94d2d Jan 12 03:02:48.404: INFO: Logging pods the kubelet thinks is on node i-02c6337a529a94d2d Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-929h5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-brxkt started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: ebs-csi-node-znqk8 started at 
2023-01-12 02:34:45 +0000 UTC (0+3 container statuses recorded) Jan 12 03:02:48.508: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:02:48.508: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:02:48.508: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-h4b5b started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: false, restart count 0 Jan 12 03:02:48.508: INFO: coredns-559769c974-vfctb started at 2023-01-12 02:34:58 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container coredns ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqd7f started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rkv68 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: coredns-autoscaler-7cb5c5b969-8hxnk started at 2023-01-12 02:34:58 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container autoscaler ready: true, restart count 0 Jan 12 03:02:48.508: INFO: slave started at 2023-01-12 02:57:43 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cntr ready: false, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.508: INFO: master started at 2023-01-12 02:56:27 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cntr ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cilium-8zgmk started at 2023-01-12 02:34:45 +0000 UTC (1+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:02:48.508: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:02:48.508: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-djm4z started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:48.508: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:48.920: INFO: Latency metrics for node i-02c6337a529a94d2d Jan 12 03:02:48.920: INFO: Logging node info for node i-0747e3c6943e05486 Jan 12 03:02:49.018: INFO: Node Info: &Node{ObjectMeta:{i-0747e3c6943e05486 94025611-bb76-47e9-8f7a-10abc64c0736 50573 0 2023-01-12 02:34:47 +0000 UTC <nil> 
<nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a io.kubernetes.storage.mock/node:some-mock-node kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0747e3c6943e05486 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-0747e3c6943e05486 topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.53.236 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0747e3c6943e05486"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e.test Update v1 2023-01-12 02:56:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-12 03:02:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-0747e3c6943e05486,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:34:47 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:34:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:34:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:35:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.53.236,},NodeAddress{Type:ExternalIP,Address:18.130.192.62,},NodeAddress{Type:InternalDNS,Address:i-0747e3c6943e05486.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-0747e3c6943e05486.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-130-192-62.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec27fc6850706b89bd5a8bfef3f05abd,SystemUUID:ec27fc68-5070-6b89-bd5a-8bfef3f05abd,BootID:89fc9100-39de-47af-b873-b81034bcd3ed,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:02:49.018: INFO: Logging kubelet events for node i-0747e3c6943e05486 Jan 12 03:02:49.119: INFO: Logging pods the kubelet thinks is on node i-0747e3c6943e05486 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mxkb7 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nt8r6 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tdmz9 started at 2023-01-12 02:56:00 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vqkq5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cilium-4s86z started at 2023-01-12 02:34:48 +0000 UTC (1+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:02:49.223: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x8kxx started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.223: INFO: ebs-csi-node-rs8xs started at 2023-01-12 02:34:48 +0000 UTC (0+3 container statuses recorded) Jan 12 03:02:49.223: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:02:49.223: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:02:49.223: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:02:49.223: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.223: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:49.604: INFO: Latency metrics for node i-0747e3c6943e05486 Jan 12 03:02:49.604: INFO: Logging node info for node i-087271e72eee5713b Jan 12 03:02:49.702: INFO: Node Info: &Node{ObjectMeta:{i-087271e72eee5713b a7a69fc2-8177-4d34-98f2-9738138ef772 50298 0 2023-01-12 02:33:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux 
failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-087271e72eee5713b kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:eu-west-2a topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.41.237 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-087271e72eee5713b"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-12 02:33:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {protokube Update v1 2023-01-12 02:33:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:33:57 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-12 03:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-087271e72eee5713b,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 
DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.237,},NodeAddress{Type:ExternalIP,Address:13.40.44.188,},NodeAddress{Type:InternalDNS,Address:i-087271e72eee5713b.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-087271e72eee5713b.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-40-44-188.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2684ed0ebff6bd7e100dbb20fe020a,SystemUUID:ec2684ed-0ebf-f6bd-7e10-0dbb20fe020a,BootID:6d3aa6dc-d3d9-4719-ab22-415491ec2425,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.0],SizeBytes:135162323,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.0],SizeBytes:124991801,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.0],SizeBytes:57657656,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982334,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804914,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 
registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967334,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:02:49.702: INFO: Logging kubelet events for node i-087271e72eee5713b Jan 12 03:02:49.802: INFO: Logging pods the kubelet thinks is on node i-087271e72eee5713b Jan 12 03:02:49.911: INFO: etcd-manager-main-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container etcd-manager ready: true, restart count 0 Jan 12 03:02:49.911: INFO: ebs-csi-node-kmf4s started at 2023-01-12 02:33:37 +0000 UTC (0+3 container statuses recorded) Jan 12 03:02:49.911: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:02:49.911: INFO: kops-controller-rznh8 started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container kops-controller ready: true, restart count 0 Jan 12 03:02:49.911: INFO: dns-controller-56cf649474-hp8f9 started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container dns-controller ready: true, restart count 0 Jan 12 03:02:49.911: INFO: ebs-csi-controller-89bddb897-vs5kq started at 2023-01-12 02:33:38 +0000 UTC (0+5 container statuses recorded) Jan 12 03:02:49.911: INFO: Container csi-attacher ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container csi-provisioner ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container csi-resizer ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:02:49.911: INFO: aws-cloud-controller-manager-h98xx started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container aws-cloud-controller-manager ready: true, restart count 0 Jan 12 03:02:49.911: INFO: cilium-operator-6f8d6c4bf-x8z6b started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses 
recorded) Jan 12 03:02:49.911: INFO: Container cilium-operator ready: true, restart count 1 Jan 12 03:02:49.911: INFO: kube-scheduler-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container kube-scheduler ready: true, restart count 0 Jan 12 03:02:49.911: INFO: etcd-manager-events-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container etcd-manager ready: true, restart count 0 Jan 12 03:02:49.911: INFO: kube-apiserver-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+2 container statuses recorded) Jan 12 03:02:49.911: INFO: Container healthcheck ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container kube-apiserver ready: true, restart count 1 Jan 12 03:02:49.911: INFO: kube-controller-manager-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 12 03:02:49.911: INFO: cilium-wjxb5 started at 2023-01-12 02:33:37 +0000 UTC (1+1 container statuses recorded) Jan 12 03:02:49.911: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:02:49.911: INFO: Container cilium-agent ready: true, restart count 1 Jan 12 03:02:50.284: INFO: Latency metrics for node i-087271e72eee5713b Jan 12 03:02:50.284: INFO: Logging node info for node i-09933795397c99cb9 Jan 12 03:02:50.382: INFO: Node Info: &Node{ObjectMeta:{i-09933795397c99cb9 5072dbb3-1bab-4256-ba82-f5b4613ca0ad 50566 0 2023-01-12 02:34:46 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:i-09933795397c99cb9 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-09933795397c99cb9 topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.63.113 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09933795397c99cb9"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet 
Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {e2e.test Update v1 2023-01-12 02:56:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-12 03:02:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-09933795397c99cb9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.113,},NodeAddress{Type:ExternalIP,Address:35.176.231.20,},NodeAddress{Type:InternalDNS,Address:i-09933795397c99cb9.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-09933795397c99cb9.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-176-231-20.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec252ab9650339207a7903fc9d22ecb9,SystemUUID:ec252ab9-6503-3920-7a79-03fc9d22ecb9,BootID:76ee20c7-10d2-499b-9d10-c7543451f6da,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:02:50.382: INFO: Logging kubelet events for node i-09933795397c99cb9 Jan 12 03:02:50.485: INFO: Logging pods the kubelet thinks is on node i-09933795397c99cb9 Jan 12 03:02:50.595: INFO: cilium-gmpv4 started at 2023-01-12 02:34:47 +0000 UTC (1+1 container statuses recorded) Jan 12 03:02:50.595: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:02:50.595: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:02:50.595: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tpcpj started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.595: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.595: INFO: ebs-csi-node-qbbzk started at 2023-01-12 02:34:47 +0000 UTC (0+3 container statuses recorded) Jan 12 03:02:50.595: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:02:50.595: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:02:50.595: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:02:50.595: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x5w6b started at 2023-01-12 02:56:00 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.595: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.595: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.595: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.595: INFO: 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rklf2 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.595: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.595: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x4sh5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.595: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.596: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.596: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.596: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.596: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.596: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-s688f started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.596: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.596: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-jj5bm started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.596: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.596: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.596: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:02:50.596: INFO: coredns-559769c974-hn5g4 started at 2023-01-12 02:35:21 +0000 UTC (0+1 container statuses recorded) Jan 12 03:02:50.596: INFO: Container coredns ready: true, restart count 0 Jan 12 03:02:50.979: INFO: Latency metrics for node i-09933795397c99cb9 [DeferCleanup (Each)] [sig-node] Mount propagation tear down framework | framework.go:193 STEP: Destroying namespace "mount-propagation-8493" for this suite. 01/12/23 03:02:50.979
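Both failures in this run share the same shape: a workload is created and the test then polls its status until it is running and ready or a deadline expires, and here the deadline wins. Below is a minimal sketch of that create-then-poll pattern using client-go; waitForPodRunningAndReady is an illustrative name, not the framework's actual helper.

```go
// A minimal sketch, under assumed names, of the create-then-poll pattern both
// failed tests rely on: after creating a pod, poll until it reports
// Phase=Running with condition Ready=True, or give up at the deadline. This is
// NOT the e2e framework's CreateSync implementation, only the shape of the wait.
package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady (hypothetical helper) polls every 2s until the pod
// is Running and Ready, returning the poll loop's timeout error otherwise.
func waitForPodRunningAndReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != v1.PodRunning {
			return false, nil // still Pending: keep polling until the deadline
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady {
				return cond.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```

A pod that never leaves Pending, as above, makes such a loop consume its full timeout before the suite moves on to teardown.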
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\skubelet\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
test/e2e/node/kubelet.go:359 k8s.io/kubernetes/test/e2e/node.glob..func5.2.3() test/e2e/node/kubelet.go:359 +0x390 from junit_01.xml
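The runner log that follows polls the replication controller roughly every ten seconds and plateaus at 39 running, 1 pending, so the test spends its entire budget waiting on a single straggler. A hedged diagnostic sketch, separate from the e2e suite itself, that lists the RC's pods in namespace kubelet-7880 and prints the conditions of any pod that never reached Running (the name= label selector is an assumption about how the RC labels its pods):

```go
// Diagnostic sketch (not part of the e2e suite): find pods behind the
// cleanup40-... replication controller that are not Running and dump their
// conditions. Namespace and kubeconfig path are taken from this log; the
// "name=<rc-name>" label selector is assumed.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pods, err := client.CoreV1().Pods("kubelet-7880").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5", // assumed label
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase == v1.PodRunning {
			continue // healthy; only report the stragglers
		}
		fmt.Printf("%s on node %s: phase=%s\n", p.Name, p.Spec.NodeName, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			fmt.Printf("  %s=%s reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}
```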
[BeforeEach] [sig-node] kubelet set up framework | framework.go:178 STEP: Creating a kubernetes client 01/12/23 02:55:59.332 Jan 12 02:55:59.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet 01/12/23 02:55:59.333 STEP: Waiting for a default service account to be provisioned in namespace 01/12/23 02:55:59.624 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/12/23 02:55:59.815 [BeforeEach] [sig-node] kubelet test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-node] kubelet test/e2e/node/kubelet.go:278 [BeforeEach] Clean up pods on node test/e2e/node/kubelet.go:299 [It] kubelet should be able to delete 10 pods per node in 1m0s. test/e2e/node/kubelet.go:346 STEP: Creating a RC of 40 pods and wait until all pods of this RC are running 01/12/23 02:56:00.751 STEP: creating replication controller cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 in namespace kubelet-7880 01/12/23 02:56:00.751 I0112 02:56:00.878602 6734 runners.go:193] Created replication controller with name: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5, namespace: kubelet-7880, replica count: 40 Jan 12 02:56:00.965: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:01.018: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:01.386: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:56:01.406: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:01.451: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:06.085: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:06.157: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:06.556: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:06.596: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:06.718: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" I0112 02:56:11.029979 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 6 running, 34 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:56:11.211: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:11.309: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:11.713: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:11.725: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:11.857: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:56:16.335: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:16.443: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:16.843: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:16.886: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:16.985: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" I0112 02:56:21.030982 6734 runners.go:193] 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 21 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:56:21.460: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:21.639: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:21.977: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:22.020: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:22.111: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:56:26.584: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:26.777: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:27.116: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:27.147: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:27.240: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" I0112 02:56:31.032051 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 22 running, 18 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:56:31.713: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:31.910: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:32.252: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:32.342: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:32.367: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:56:36.836: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:37.042: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:37.472: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:37.479: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:37.516: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" I0112 02:56:41.032889 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 23 running, 17 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:56:41.955: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:42.174: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:42.606: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:42.614: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:42.649: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:56:47.080: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:47.313: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:47.740: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:47.767: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:47.783: INFO: Missing info/stats for container 
"runtime" on node "i-0747e3c6943e05486" I0112 02:56:51.033211 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 25 running, 15 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:56:52.201: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:52.450: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:52.890: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:52.894: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:52.909: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:56:57.321: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:56:57.580: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:56:58.027: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:56:58.028: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:56:58.034: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" I0112 02:57:01.034313 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 28 running, 12 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:57:02.446: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:02.786: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:03.158: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:03.159: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:03.167: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:07.568: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:07.936: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:08.286: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:08.293: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:08.306: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" I0112 02:57:11.035162 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 29 running, 11 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:57:12.688: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:13.113: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:13.433: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:13.438: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:13.512: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:17.810: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:18.262: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:18.560: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:18.564: INFO: Missing info/stats for container "runtime" on node 
"i-0747e3c6943e05486" Jan 12 02:57:18.631: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:57:21.035981 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 32 running, 8 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:57:22.931: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:23.393: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:23.695: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:23.698: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:23.752: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:28.057: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:28.538: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:28.817: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:28.823: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:28.878: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:57:31.036285 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:57:33.180: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:33.666: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:33.943: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:33.950: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:34.004: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:38.300: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:38.789: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:39.071: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:39.125: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:39.144: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" I0112 02:57:41.036790 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:57:43.422: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:43.926: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:44.211: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:44.264: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:44.279: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:48.550: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:49.063: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:49.335: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" 
Jan 12 02:57:49.405: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:49.437: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:57:51.037302 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:57:53.668: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:54.222: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:54.459: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:54.529: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:54.561: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:57:58.791: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:57:59.357: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:57:59.593: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:57:59.673: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:57:59.688: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:58:01.038310 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:58:03.913: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:04.485: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:04.734: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:04.803: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:04.813: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:09.035: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:09.633: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:09.959: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:09.978: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:09.990: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:58:11.039141 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:58:14.155: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:14.762: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:15.094: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:15.112: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:15.119: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:19.274: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:19.896: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:20.217: 
INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:20.239: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:20.247: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" I0112 02:58:21.040097 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:58:24.393: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:25.056: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:25.342: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:25.366: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:25.372: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:29.513: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:30.188: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:30.466: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:30.495: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:30.527: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:58:31.040805 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:58:34.634: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:35.347: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:35.604: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:35.664: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:35.700: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:39.754: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:40.472: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:40.736: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:40.807: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:40.832: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:58:41.041844 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:58:44.873: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:45.607: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:45.860: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:45.955: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:45.955: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:49.995: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:50.739: INFO: Missing 
info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:50.985: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" I0112 02:58:51.042653 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:58:51.080: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:51.084: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:58:55.117: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:58:55.893: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:58:56.113: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:58:56.205: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:58:56.206: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:00.237: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:01.020: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" I0112 02:59:01.043748 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:59:01.240: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:01.375: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:01.410: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:05.359: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:06.150: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:06.365: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:06.495: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:06.551: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:10.481: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" I0112 02:59:11.044519 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:59:11.296: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:11.490: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:11.620: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:11.674: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:15.602: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:16.488: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:16.613: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:16.740: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:16.803: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:20.722: INFO: Missing info/stats for 
container "runtime" on node "i-087271e72eee5713b" I0112 02:59:21.045195 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:59:21.629: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:21.735: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:21.899: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:21.927: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:25.841: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:26.757: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:26.863: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:27.020: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:27.066: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:30.965: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" I0112 02:59:31.046042 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:59:31.899: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:31.985: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:32.185: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:32.212: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:36.086: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:37.023: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:37.114: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:37.316: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:37.336: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 02:59:41.046956 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:59:41.206: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:42.146: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:42.242: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:42.441: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:42.459: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:46.332: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:47.279: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:47.390: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:47.564: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:47.607: INFO: Missing info/stats for container "runtime" 
on node "i-09933795397c99cb9" I0112 02:59:51.048113 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 02:59:51.452: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:52.412: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:52.537: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:52.699: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:52.774: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 02:59:56.572: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 02:59:57.538: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 02:59:57.655: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 02:59:57.824: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 02:59:57.896: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:00:01.049164 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:00:01.691: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:02.691: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:02.790: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:02.951: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:03.020: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:00:06.817: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:07.882: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:07.913: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:08.145: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:08.152: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:00:11.049804 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:00:11.938: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:13.016: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:13.036: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:13.273: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:13.276: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:00:17.059: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:18.158: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:18.160: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:18.400: INFO: Missing info/stats for container "runtime" on node 
"i-027f21d552ba8db57" Jan 12 03:00:18.400: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:00:21.050931 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:00:22.179: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:23.286: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:23.296: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:23.520: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:23.533: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:00:27.299: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:28.402: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:28.422: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:28.652: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:28.660: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:00:31.051795 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:00:32.419: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:33.548: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:33.558: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:33.773: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:33.781: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:00:37.539: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:38.669: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:38.691: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:38.894: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:38.904: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:00:41.052446 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:00:42.660: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:43.807: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:43.838: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:44.028: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:44.050: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:00:47.782: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:48.930: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:48.980: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" 
Jan 12 03:00:49.149: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:49.178: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:00:51.053242 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:00:52.910: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:54.066: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:54.132: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:54.278: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:54.301: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:00:58.029: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:00:59.198: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:00:59.260: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:00:59.427: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:00:59.430: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" I0112 03:01:01.054099 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:01:03.152: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:01:04.325: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:01:04.395: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:01:04.551: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:01:04.554: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:01:08.274: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:01:09.446: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:01:09.532: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:01:09.679: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:01:09.684: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" I0112 03:01:11.055379 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 12 03:01:13.397: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:01:14.564: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:01:14.660: INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d" Jan 12 03:01:14.805: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57" Jan 12 03:01:14.835: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9" Jan 12 03:01:18.516: INFO: Missing info/stats for container "runtime" on node "i-087271e72eee5713b" Jan 12 03:01:19.684: INFO: Missing info/stats for container "runtime" on node "i-0747e3c6943e05486" Jan 12 03:01:19.795: 
INFO: Missing info/stats for container "runtime" on node "i-02c6337a529a94d2d"
Jan 12 03:01:19.928: INFO: Missing info/stats for container "runtime" on node "i-027f21d552ba8db57"
Jan 12 03:01:19.956: INFO: Missing info/stats for container "runtime" on node "i-09933795397c99cb9"
[... the same "Missing info/stats" message recurs roughly every 5s for nodes i-087271e72eee5713b, i-0747e3c6943e05486, i-02c6337a529a94d2d, i-09933795397c99cb9 and i-027f21d552ba8db57 until 03:03:00.959 ...]
I0112 03:01:21.056169 6734 runners.go:193] cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Pods: 40 out of 40 created, 39 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
[... runners.go repeats the identical tally every 10s through I0112 03:03:01.061765; the count never moves past 39 running / 1 pending ...]
I0112 03:03:01.162773 6734 runners.go:193] Pod cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8 i-02c6337a529a94d2d Running <nil>
[... the other 38 Running cleanup40 pods are listed one per line across nodes i-02c6337a529a94d2d, i-027f21d552ba8db57, i-0747e3c6943e05486 and i-09933795397c99cb9; only one pod is not Running: ...]
I0112 03:03:01.163239 6734 runners.go:193] Pod cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6 i-02c6337a529a94d2d Pending <nil>
Jan 12 03:03:01.163: INFO: Unexpected error: <*errors.errorString | 0xc00116e8c0>: { s: "only 39 pods started out of 40", }
Jan 12 03:03:01.163: FAIL: only 39 pods started out of 40
Full Stack Trace
k8s.io/kubernetes/test/e2e/node.glob..func5.2.3()
	test/e2e/node/kubelet.go:359 +0x390
[AfterEach] Clean up pods on node
  test/e2e/node/kubelet.go:330
STEP: removing the label kubelet_cleanup off the node i-027f21d552ba8db57 01/12/23 03:03:01.163
STEP: verifying the node doesn't have the label kubelet_cleanup 01/12/23 03:03:01.453
STEP: removing the label kubelet_cleanup off the node i-02c6337a529a94d2d 01/12/23 03:03:01.551
STEP: verifying the node doesn't have the label kubelet_cleanup 01/12/23 03:03:01.75
STEP: removing the label kubelet_cleanup off the node i-0747e3c6943e05486 01/12/23 03:03:01.847
STEP: verifying the node doesn't have the label kubelet_cleanup 01/12/23 03:03:02.048
STEP: removing the label kubelet_cleanup off the node i-09933795397c99cb9 01/12/23 03:03:02.145
STEP: verifying the node doesn't have the label kubelet_cleanup 01/12/23 03:03:02.344
[AfterEach] [sig-node] kubelet
  test/e2e/framework/node/init/init.go:32
Jan 12 03:03:02.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] kubelet
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] kubelet
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/12/23 03:03:02.538
STEP: Collecting events from namespace "kubelet-7880". 01/12/23 03:03:02.538
STEP: Found 269 events. 01/12/23 03:03:02.64
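Editor's note: the FAIL above is the e2e runner's pod-phase tally tripping — 40 pods were created but only 39 ever reached Running (cleanup40-...-w2vt6 stayed Pending). As a rough, hypothetical illustration of that kind of check, not the framework's actual runners.go logic, a minimal client-go sketch might look like this (the kubeconfig path is taken from the log header; the label selector is an assumption):

// Hypothetical sketch (not the framework's actual runners.go code): tally pod
// phases for the cleanup RC the way the "39 running, 1 pending" lines do.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Label selector is assumed for illustration; the real runner tracks its
	// pods via the replication controller that created them.
	pods, err := clientset.CoreV1().Pods("kubelet-7880").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5"})
	if err != nil {
		panic(err)
	}

	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	if running < len(pods.Items) {
		// This is the condition that surfaces as the FAIL line above.
		fmt.Printf("only %d pods started out of %d\n", running, len(pods.Items))
	}
}

The event dump that follows shows why the last pod never made it to Running.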
Jan 12 03:03:02.640: INFO: At 2023-01-12 02:56:00 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5: {replication-controller } SuccessfulCreate: Created pod: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-wdxzv
[... matching SuccessfulCreate events record the replication controller creating all 40 cleanup40 pods between 02:56:00 and 02:56:01 ...]
Jan 12 03:03:02.640: INFO: At 2023-01-12 02:56:00 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tdmz9: {default-scheduler } Scheduled: Successfully assigned kubelet-7880/cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tdmz9 to i-0747e3c6943e05486
[... matching Scheduled events assign the remaining cleanup40 pods across nodes i-02c6337a529a94d2d, i-027f21d552ba8db57, i-0747e3c6943e05486 and i-09933795397c99cb9 by 02:56:01 ...]
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:02 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "60fb75a86273c22b8b1076cac7cb21b31f5560413045e54d2b165ed3f593c7e5": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
[... the same FailedCreatePodSandBox / "No more IPs available" event repeats for roughly twenty cleanup40 pods on all four worker nodes between 02:56:02 and 02:56:04, each attempt with a different sandbox ID ...]
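Editor's note: every sandbox failure in this dump shares one root cause — the cilium-cni plugin cannot obtain a pod IP from the node-local Cilium agent (the node's IPAM pool is exhausted), so the kubelet keeps retrying sandbox creation until an address frees up. When triaging a run like this, it can help to count such events per node. A hypothetical client-go sketch (the namespace and event reason are taken from this log; this is diagnostic tooling, not part of the test itself):

// Hypothetical diagnostic sketch: tally FailedCreatePodSandBox events per node
// to confirm the IPAM-exhaustion pattern affects all worker nodes.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Events support field selectors on reason, so filter server-side.
	events, err := clientset.CoreV1().Events("kubelet-7880").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "reason=FailedCreatePodSandBox"})
	if err != nil {
		panic(err)
	}

	perNode := map[string]int{}
	for _, ev := range events.Items {
		if strings.Contains(ev.Message, "No more IPs available") {
			perNode[ev.Source.Host]++
		}
	}
	for node, n := range perNode {
		fmt.Printf("%s: %d sandbox failures due to IPAM exhaustion\n", node, n)
	}
}

The same list is available from the command line with: kubectl get events -n kubelet-7880 --field-selector reason=FailedCreatePodSandBox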
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:05 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-djm4z: {kubelet i-02c6337a529a94d2d} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:05 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-djm4z: {kubelet i-02c6337a529a94d2d} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:05 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-djm4z: {kubelet i-02c6337a529a94d2d} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5
[... matching Pulled/Created/Started triples record pods coming up between 02:56:05 and 02:56:30 as IPs become available; the pods still without an IP keep failing sandbox creation with the same "No more IPs available" error between 02:56:14 and 02:56:30 ...]
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:30 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1f76c75d3d7bc534be7847b1cc734c4522c6ec036a6dac3c26152b8076df856a": plugin type="cilium-cni" name="cilium" failed
(add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:31 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "25dc842b95ea2b3be1d92561a7bcf4e03752a6cbd3e4660fc550b832ec89308e": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:31 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5f5454280a95bccae6aead14e7725c03effe6327d468333510c5ade9cc4e1bbe": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:31 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "595961aa8cb85309811965841e197ef0ba108707a75ad584905fa8c4c3058c7c": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:31 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "87f278f833ab1251f0594ecf932c6a8c2c1afd5c72036f657fac414f7d718600": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:33 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c16ec6c2bce35a38f2af0419bcd466f46d59e00acaabc5bc828417510c675d67": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:33 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8e10f25df955a6fff43798fee810f9297a43a742589cb335bc9f8d903fca56e": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:33 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e1d93e7fed380865302abe832a3ebef0678cb1ef1bd623ffd1704f8f1d963854": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local 
cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:39 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "19e30e67ac389f91f02ba091a29f93c5ebac9b374b6fd89218197451140b0413": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:41 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5fce8062f4a424fe98c524b120b80570b3788928096bf6c79f1e09c957ca1891": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:42 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b0380fb411fbb746fc9eab9c6b8850b65ac2be6a48cd3d575e76c71aa7acbbd2": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:42 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0e291c65ce9d35ae1096e24869f4b728adad192f3a6f510391bb6ba44f867b19": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:43 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d78962769d97d09f24d27b38303cdbd20cb15af01be65cef0f81a58fc4fae84b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:43 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e5eaa8bd298fa2e724d3b7cdd412634fd234b06f66a574ddc8cce1d49375ea92": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:43 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "343c9c13d0de6e6e5f23a3aea5d372b0ff8dbfdfc89d227421d5c4de79b218ac": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] 
postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:43 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "cfbfd0ef89ec9a2126470ecc95a5fa692776ee9604a83177b73d4d9d908bd800": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:44 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "79e3176c879e944c6599f1d2dc8ccca4ee9d016f624cdfc383548cd38920fd4b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:44 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e602c7f42a2eeef7158a53a7cabe2c3f93302ee1a4e7f00d54fe7c810853348a": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:44 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5045993de1a30f4d87a2bb50deed3030b4a803f91fb267081678abc9decf7105": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:44 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "18b528d61adae4680a1bc653f5e91d0728f94b74f57814e176fea9ee514fdab4": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:45 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-jj5bm: {kubelet i-09933795397c99cb9} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:45 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-jj5bm: {kubelet i-09933795397c99cb9} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:45 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-jj5bm: {kubelet i-09933795397c99cb9} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:45 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox 
"b461eaa0a70b558ff0465d9c31fcee03cb1757a3413962838ac6d335d195e488": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:45 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7c31429db52ecff9fe9ff411d00ec14744af5416a50c7bda6e401cfcf864b436": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:48 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g: {kubelet i-027f21d552ba8db57} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:48 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g: {kubelet i-027f21d552ba8db57} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:48 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g: {kubelet i-027f21d552ba8db57} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:48 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e1b7c8ee4082f770d2889f9b69df4dccd6130ba9208c5d1394a9ba233014859b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:53 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "94882ed8f2bb734e35730f6e8dcbdce988746b3385863e56635a9b74ec0925b4": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:54 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "074e5624aae80538938fc4c5574c32e4e04eba8f895f8c3d561f0456ed3f6966": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:55 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5: {kubelet i-09933795397c99cb9} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:55 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5: {kubelet i-09933795397c99cb9} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:55 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v: {kubelet i-0747e3c6943e05486} Pulled: 
Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:55 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v: {kubelet i-0747e3c6943e05486} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:55 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v: {kubelet i-0747e3c6943e05486} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:56 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a6a875ef4b4cc8b2861f715ac1b4154fed2b205827a616fd65a47745bf79eaf7": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:56 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5: {kubelet i-09933795397c99cb9} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:56 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c3ec8df52f458d2b1300208778a0be800d372bb097c9c2f2f109a9b1f8e8d4e2": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:57 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2fda1caf1429fd5ebdf21997b484d397b74e7fbf3762c9c3315ce109f5f5b34d": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:57 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv: {kubelet i-027f21d552ba8db57} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:57 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv: {kubelet i-027f21d552ba8db57} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:57 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv: {kubelet i-027f21d552ba8db57} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:57 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "950ef7ecdd0884b8aca51ab620149029af6039446027c63a519e41ffc143a61c": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:58 +0000 UTC - event for 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "27990d5f0f9bd82124b5ff8dae11467b1e9fab89af7dcd0346869ab5174afec0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:58 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "cbedb31b4eed217f8e876bfffbf8facfc435d5476c1bb46c602bbb80acad1774": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:59 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ad2053965f88abab8ae660a103f4b2edca532842a55004aff120bd913f8c5a7b": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:59 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "683e075b3c5220c933794a67ea53df3888818237cafaf2be8cbac81ce9c69c3e": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:56:59 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2d949152be8a89ebbeb3f5f89476d827d9f31a66b9d26800841c6b9cf3e8636e": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:00 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3659ed9a8e7892563d2693c097103ee14bd3ecedfe4540d8c88d3cb648c364fd": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:04 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9ff11bf44e9393493b911dd36f85eb5e5c4b82fa849d0436eba6c6ddb9fa47c0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:08 +0000 UTC - event for 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d51cbd7f14a34f59df51103790f0cb261865b7cc7174aa006a797d678193f54e": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:09 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "618fff2ce2dad838bf54b8c45112bf4fa5a8d1abb9a711aeb7ca637dd4a8b929": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:09 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "663710fd41958476bc908b88e4c16631a11ab1f4a6cc3594e0dbda3ecfa7f516": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:09 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4: {kubelet i-09933795397c99cb9} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:09 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4: {kubelet i-09933795397c99cb9} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:10 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4: {kubelet i-09933795397c99cb9} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:10 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "97dd0139fa19169c38f3ba96245d4a6f53426286154b7d551d4221dcd5dfcb63": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:11 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b5d57f9330a2c4fea653de4ff6af4931603b9174732dc332800ae7ba0d97d566": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:12 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d27c0235fa401776a95a5af61cafee028453616dd0d9e167e2b2e950347eebd2": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP 
via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:13 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5: {kubelet i-0747e3c6943e05486} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:13 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5: {kubelet i-0747e3c6943e05486} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:13 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5: {kubelet i-0747e3c6943e05486} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:13 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "61c48915538793b64a0360942b1bb8e46f59b51bcc30c42e4ba374b52e7937f9": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:15 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr: {kubelet i-0747e3c6943e05486} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:15 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr: {kubelet i-0747e3c6943e05486} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:15 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr: {kubelet i-0747e3c6943e05486} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:15 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c31e5c88dbe1cf1bbfb4e9311546e9a045d8e23f0675574b563405b56bd9a6a0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:19 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:19 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:20 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd: {kubelet i-02c6337a529a94d2d} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:20 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8776f2e02b4fd491a050a0082d86616fda5a42345144cc4f0b466aa0922d7cee": plugin type="cilium-cni" name="cilium" 
failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:24 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8: {kubelet i-02c6337a529a94d2d} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:24 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8: {kubelet i-02c6337a529a94d2d} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:24 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8: {kubelet i-02c6337a529a94d2d} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:25 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v: {kubelet i-09933795397c99cb9} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:25 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v: {kubelet i-09933795397c99cb9} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:25 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "62fb6df8def16535fe937d3099cd35bbcba40fcc1dc75edf56016869dfed0746": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:25 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d93610036e502d6684661bcbcaf51b55187b01ed4e5346aa6cca0b66139f4d22": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:26 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v: {kubelet i-09933795397c99cb9} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:27 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:27 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:27 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf: {kubelet i-0747e3c6943e05486} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:27 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:27 +0000 UTC - event for 
cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:27 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct: {kubelet i-09933795397c99cb9} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:28 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:28 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:28 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg: {kubelet i-0747e3c6943e05486} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:31 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ac491d5712b7e155d5e53d84c77cafbb33e9f8b8742af9422ce72a112b7d2754": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:37 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:37 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:38 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv: {kubelet i-02c6337a529a94d2d} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:39 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "87faa8b702c9ca2db1155e746764a57309e6f3deacbc16a1df3efc41e85461da": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:46 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a7b2dd4a793da2a33d746c45c1840de27c203ef1b4cd9fd644ef243828987720": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:53 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Jan 12 03:03:02.641: INFO: At 
2023-01-12 02:57:53 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} Created: Created container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:57:53 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v: {kubelet i-027f21d552ba8db57} Started: Started container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5
Jan 12 03:03:02.641: INFO: At 2023-01-12 02:58:01 +0000 UTC - event for cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6: {kubelet i-02c6337a529a94d2d} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "38d23e778a9f96baba84dccf8855d931072e6d2f880047feec5edcc3bfd06cb3": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 12 03:03:02.742: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8 i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5 i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-929h5 i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-bgpgg i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-brxkt i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-btsbw i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-djm4z i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-h4b5b i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-hr497 i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-jj5bm i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4 i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mxkb7 i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nf9x6 i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqd7f i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nt8r6 i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5 i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rklf2 i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rkv68 i-02c6337a529a94d2d Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-s688f i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tdmz9 i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tpcpj i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-v7pmj i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vqkq5 i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6 i-02c6337a529a94d2d Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC ContainersNotReady containers with unready status: [cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC ContainersNotReady containers with unready status: [cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-wdxzv i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x4sh5 i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x5w6b i-09933795397c99cb9 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:57:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }]
Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x8kxx i-0747e3c6943e05486 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2023-01-12 02:56:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:00 +0000 UTC }] Jan 12 03:03:02.742: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-zncrz i-027f21d552ba8db57 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-12 02:56:01 +0000 UTC }] Jan 12 03:03:02.742: INFO: Jan 12 03:03:03.336: INFO: Logging node info for node i-027f21d552ba8db57 Jan 12 03:03:03.433: INFO: Node Info: &Node{ObjectMeta:{i-027f21d552ba8db57 afc72005-0dff-4853-9ebc-4181b449b438 50706 0 2023-01-12 02:34:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-027f21d552ba8db57 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-027f21d552ba8db57 topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.47.82 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-027f21d552ba8db57"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-01-12 03:02:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-027f21d552ba8db57,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.82,},NodeAddress{Type:ExternalIP,Address:18.170.218.178,},NodeAddress{Type:InternalDNS,Address:i-027f21d552ba8db57.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-027f21d552ba8db57.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-170-218-178.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28b8d29728f0c796d7d7c40a269a36,SystemUUID:ec28b8d2-9728-f0c7-96d7-d7c40a269a36,BootID:98da75b1-fa84-4aa0-b022-7917ae65922c,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:03:03.434: INFO: Logging kubelet events for node i-027f21d552ba8db57 Jan 12 03:03:03.533: INFO: Logging pods the kubelet thinks is on node i-027f21d552ba8db57 Jan 12 03:03:03.637: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-5zq9g started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cilium-9zwcj started at 2023-01-12 02:34:46 +0000 UTC (1+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:03:03.638: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-v7pmj started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nf9x6 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rtg9v started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: 
Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-zncrz started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-bgpgg started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nlhdv started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-hr497 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: ebs-csi-node-tkggt started at 2023-01-12 02:34:46 +0000 UTC (0+3 container statuses recorded) Jan 12 03:03:03.638: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:03:03.638: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:03:03.638: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-wdxzv started at 2023-01-12 02:56:00 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:03.638: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-btsbw started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:03.638: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.036: INFO: Latency metrics for node i-027f21d552ba8db57 Jan 12 03:03:04.036: INFO: Logging node info for node i-02c6337a529a94d2d Jan 12 03:03:04.133: INFO: Node Info: &Node{ObjectMeta:{i-02c6337a529a94d2d 09fd98e2-daf3-4206-adbe-9bde1cb707ea 50709 0 2023-01-12 02:34:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02c6337a529a94d2d kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-02c6337a529a94d2d topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.40.254 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02c6337a529a94d2d"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager 
Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-01-12 03:02:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-02c6337a529a94d2d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:31 +0000 UTC,LastTransitionTime:2023-01-12 02:34:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.40.254,},NodeAddress{Type:ExternalIP,Address:13.40.115.70,},NodeAddress{Type:InternalDNS,Address:i-02c6337a529a94d2d.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-02c6337a529a94d2d.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-40-115-70.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26b04c9710d692801011afeecf33f6,SystemUUID:ec26b04c-9710-d692-8010-11afeecf33f6,BootID:109b6331-8cb7-41df-87ac-5d30a4302c3f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:03:04.134: INFO: Logging kubelet events for node i-02c6337a529a94d2d Jan 12 03:03:04.233: INFO: Logging pods the kubelet thinks is on node i-02c6337a529a94d2d Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-929h5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-brxkt started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-2tvd8 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: ebs-csi-node-znqk8 started at 
2023-01-12 02:34:45 +0000 UTC (0+3 container statuses recorded) Jan 12 03:03:04.336: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:03:04.336: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:03:04.336: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-h4b5b started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-w2vt6 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: false, restart count 0 Jan 12 03:03:04.336: INFO: coredns-559769c974-vfctb started at 2023-01-12 02:34:58 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container coredns ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqd7f started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rkv68 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: coredns-autoscaler-7cb5c5b969-8hxnk started at 2023-01-12 02:34:58 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container autoscaler ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-7n9cd started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mh7wv started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cilium-8zgmk started at 2023-01-12 02:34:45 +0000 UTC (1+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:03:04.336: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:03:04.336: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-djm4z started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:04.336: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:04.747: INFO: Latency metrics for node i-02c6337a529a94d2d Jan 12 03:03:04.747: INFO: Logging node info for node i-0747e3c6943e05486 Jan 12 03:03:04.844: INFO: Node Info: &Node{ObjectMeta:{i-0747e3c6943e05486 94025611-bb76-47e9-8f7a-10abc64c0736 50712 0 2023-01-12 02:34:47 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0747e3c6943e05486 kubernetes.io/os:linux 
node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-0747e3c6943e05486 topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.53.236 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0747e3c6943e05486"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-01-12 03:02:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-0747e3c6943e05486,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:34:47 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:34:47 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:34:47 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:30 +0000 UTC,LastTransitionTime:2023-01-12 02:35:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.53.236,},NodeAddress{Type:ExternalIP,Address:18.130.192.62,},NodeAddress{Type:InternalDNS,Address:i-0747e3c6943e05486.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-0747e3c6943e05486.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-130-192-62.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec27fc6850706b89bd5a8bfef3f05abd,SystemUUID:ec27fc68-5070-6b89-bd5a-8bfef3f05abd,BootID:89fc9100-39de-47af-b873-b81034bcd3ed,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:03:04.844: INFO: Logging kubelet events for node i-0747e3c6943e05486 Jan 12 03:03:04.943: INFO: Logging pods the kubelet thinks is on node i-0747e3c6943e05486 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nb46v started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-mxkb7 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nt8r6 started at 2023-01-12 02:56:01 +0000 
UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tdmz9 started at 2023-01-12 02:56:00 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rgvr5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vqkq5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-nqbtf started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cilium-4s86z started at 2023-01-12 02:34:48 +0000 UTC (1+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:03:05.045: INFO: Container cilium-agent ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x8kxx started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-vfhxr started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.045: INFO: ebs-csi-node-rs8xs started at 2023-01-12 02:34:48 +0000 UTC (0+3 container statuses recorded) Jan 12 03:03:05.045: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:03:05.045: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:03:05.045: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:03:05.045: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x6pvg started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.045: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0 Jan 12 03:03:05.440: INFO: Latency metrics for node i-0747e3c6943e05486 Jan 12 03:03:05.440: INFO: Logging node info for node i-087271e72eee5713b Jan 12 03:03:05.537: INFO: Node Info: &Node{ObjectMeta:{i-087271e72eee5713b a7a69fc2-8177-4d34-98f2-9738138ef772 50298 0 2023-01-12 02:33:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-087271e72eee5713b kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:eu-west-2a topology.kubernetes.io/region:eu-west-2 
topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.41.237 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-087271e72eee5713b"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-12 02:33:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {protokube Update v1 2023-01-12 02:33:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:33:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:33:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:33:57 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-12 03:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-087271e72eee5713b,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:01:14 +0000 UTC,LastTransitionTime:2023-01-12 02:33:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.237,},NodeAddress{Type:ExternalIP,Address:13.40.44.188,},NodeAddress{Type:InternalDNS,Address:i-087271e72eee5713b.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-087271e72eee5713b.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-40-44-188.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2684ed0ebff6bd7e100dbb20fe020a,SystemUUID:ec2684ed-0ebf-f6bd-7e10-0dbb20fe020a,BootID:6d3aa6dc-d3d9-4719-ab22-415491ec2425,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.0],SizeBytes:135162323,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.0],SizeBytes:124991801,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.0],SizeBytes:57657656,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982334,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804914,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b 
registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967334,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 12 03:03:05.538: INFO: Logging kubelet events for node i-087271e72eee5713b Jan 12 03:03:05.637: INFO: Logging pods the kubelet thinks is on node i-087271e72eee5713b Jan 12 03:03:05.740: INFO: kube-scheduler-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container kube-scheduler ready: true, restart count 0 Jan 12 03:03:05.740: INFO: etcd-manager-events-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container etcd-manager ready: true, restart count 0 Jan 12 03:03:05.740: INFO: kube-apiserver-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+2 container statuses recorded) Jan 12 03:03:05.740: INFO: Container healthcheck ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container kube-apiserver ready: true, restart count 1 Jan 12 03:03:05.740: INFO: kube-controller-manager-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 12 03:03:05.740: INFO: cilium-wjxb5 started at 2023-01-12 02:33:37 +0000 UTC (1+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container cilium-agent ready: true, restart count 1 Jan 12 03:03:05.740: INFO: aws-cloud-controller-manager-h98xx started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container aws-cloud-controller-manager ready: true, restart count 0 Jan 12 03:03:05.740: INFO: cilium-operator-6f8d6c4bf-x8z6b started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container cilium-operator ready: true, restart count 1 Jan 12 03:03:05.740: INFO: etcd-manager-main-i-087271e72eee5713b started at 2023-01-12 02:32:35 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container etcd-manager ready: true, restart count 0 Jan 12 03:03:05.740: INFO: ebs-csi-node-kmf4s started at 2023-01-12 02:33:37 +0000 UTC (0+3 container statuses recorded) Jan 12 03:03:05.740: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 12 03:03:05.740: 
INFO: kops-controller-rznh8 started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container kops-controller ready: true, restart count 0 Jan 12 03:03:05.740: INFO: dns-controller-56cf649474-hp8f9 started at 2023-01-12 02:33:38 +0000 UTC (0+1 container statuses recorded) Jan 12 03:03:05.740: INFO: Container dns-controller ready: true, restart count 0 Jan 12 03:03:05.740: INFO: ebs-csi-controller-89bddb897-vs5kq started at 2023-01-12 02:33:38 +0000 UTC (0+5 container statuses recorded) Jan 12 03:03:05.740: INFO: Container csi-attacher ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container csi-provisioner ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container csi-resizer ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container ebs-plugin ready: true, restart count 0 Jan 12 03:03:05.740: INFO: Container liveness-probe ready: true, restart count 0 Jan 12 03:03:06.084: INFO: Latency metrics for node i-087271e72eee5713b Jan 12 03:03:06.084: INFO: Logging node info for node i-09933795397c99cb9 Jan 12 03:03:06.181: INFO: Node Info: &Node{ObjectMeta:{i-09933795397c99cb9 5072dbb3-1bab-4256-ba82-f5b4613ca0ad 50714 0 2023-01-12 02:34:46 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-09933795397c99cb9 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:i-09933795397c99cb9 topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[alpha.kubernetes.io/provided-node-ip:172.20.63.113 csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09933795397c99cb9"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-12 02:34:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:alpha.kubernetes.io/provided-node-ip":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-01-12 03:02:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-09933795397c99cb9,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-12 03:02:29 +0000 UTC,LastTransitionTime:2023-01-12 02:34:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.113,},NodeAddress{Type:ExternalIP,Address:35.176.231.20,},NodeAddress{Type:InternalDNS,Address:i-09933795397c99cb9.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:i-09933795397c99cb9.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-176-231-20.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec252ab9650339207a7903fc9d22ecb9,SystemUUID:ec252ab9-6503-3920-7a79-03fc9d22ecb9,BootID:76ee20c7-10d2-499b-9d10-c7543451f6da,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.6.15,KubeletVersion:v1.26.0,KubeProxyVersion:v1.26.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.0],SizeBytes:67205320,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 12 03:03:06.181: INFO: Logging kubelet events for node i-09933795397c99cb9
Jan 12 03:03:06.281: INFO: Logging pods the kubelet thinks is on node i-09933795397c99cb9
Jan 12 03:03:06.385: INFO: ebs-csi-node-qbbzk started at 2023-01-12 02:34:47 +0000 UTC (0+3 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container ebs-plugin ready: true, restart count 0
Jan 12 03:03:06.385: INFO: Container liveness-probe ready: true, restart count 0
Jan 12 03:03:06.385: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x5w6b started at 2023-01-12 02:56:00 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-tpcpj started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-78cx5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-rklf2 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-lr8q4 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-sfnct started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-s688f started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-jj5bm started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-4cs8v started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5-x4sh5 started at 2023-01-12 02:56:01 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container cleanup40-c0412e85-4988-43a7-ad91-1dec22cb7ee5 ready: true, restart count 0
Jan 12 03:03:06.385: INFO: coredns-559769c974-hn5g4 started at 2023-01-12 02:35:21 +0000 UTC (0+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Container coredns ready: true, restart count 0
Jan 12 03:03:06.385: INFO: cilium-gmpv4 started at 2023-01-12 02:34:47 +0000 UTC (1+1 container statuses recorded)
Jan 12 03:03:06.385: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 12 03:03:06.385: INFO: Container cilium-agent ready: true, restart count 0
Jan 12 03:03:06.787: INFO: Latency metrics for node i-09933795397c99cb9
[DeferCleanup (Each)] [sig-node] kubelet
tear down framework | framework.go:193
STEP: Destroying namespace "kubelet-7880" for this suite. 01/12/23 03:03:06.788
exit status 255
from junit_runner.xml
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when if the container's primary UID belongs to some groups in the image [LinuxOnly] should add pod.Spec.SecurityContext.SupplementalGroups to them [LinuxOnly] in resultant supplementary groups for the container processes
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should list, patch and delete a LimitRange by collection [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
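The subPath cases above all exercise the same pod-level knob, volumeMounts[].subPath, against the various local volume types. A minimal sketch in Go, using the k8s.io/api types the suite itself is built on, of a pod that mounts only one subdirectory of a pre-provisioned volume; the claim name "local-pvc" and the directory "dir1" are hypothetical, not taken from the suite:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /data && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "local-vol",
					MountPath: "/data",
					// Only this subdirectory of the volume is mounted; the
					// cases above cover existing, non-existent, readOnly,
					// and file-backed subpaths of exactly this field.
					SubPath: "dir1",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "local-vol",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "local-pvc", // hypothetical pre-provisioned claim
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].VolumeMounts)
}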
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
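The PV Protection and PVC Protection cases verify deletion ordering that Kubernetes implements with the kubernetes.io/pv-protection and kubernetes.io/pvc-protection finalizers: deleting a bound PV or an in-use PVC only sets deletionTimestamp until the finalizer is cleared. A minimal sketch of what the protected state looks like on the object (claim name hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name: "protected-claim", // hypothetical
			// Set by the pvc-protection controller; removal of an in-use
			// claim is deferred until this finalizer is dropped.
			Finalizers: []string{"kubernetes.io/pvc-protection"},
		},
	}
	for _, f := range pvc.Finalizers {
		if f == "kubernetes.io/pvc-protection" {
			fmt.Println("claim is protected while in active use by a pod")
		}
	}
}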
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
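Two of the PersistentVolumes-local cases above assert the scheduling side of local volumes: a local PV must carry required node affinity, and a pod whose own NodeSelector or NodeAffinity conflicts with it becomes Unschedulable. A sketch of such a PV (PV name, disk path, and node name are hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fs := corev1.PersistentVolumeFilesystem
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv"}, // hypothetical
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:  &fs,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/vol1"}, // hypothetical
			},
			// Required for local volumes: pods using this PV can only land
			// on the named node, which the two "should fail scheduling"
			// cases above assert from the negative side.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-1"}, // hypothetical
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions[0].Values)
}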
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
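The Projected cases above all target one volume type that merges several sources into a single mount; the "combined" case checks all components together. A sketch of a projected volume joining a configMap, a secret, and downward-API fields (all object names hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0444)
	vol := corev1.Volume{
		Name: "all-in-one", // hypothetical
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode, // the defaultMode cases above exercise this
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // hypothetical
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"}, // hypothetical
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("projected sources: %d\n", len(vol.Projected.Sources))
}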
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
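The last Secrets case checks the immutable field: once it is set to true, the API server rejects further writes to the secret's data, and the field cannot be flipped back. A sketch (secret name and contents hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	immutable := true
	sec := corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"}, // hypothetical
		Type:       corev1.SecretTypeOpaque,
		StringData: map[string]string{"token": "example"}, // hypothetical
		// After creation with this set, updates to data are rejected.
		Immutable: &immutable,
	}
	fmt.Println(*sec.Immutable)
}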
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [It] [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest2 Down
kubetest2 Up
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
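The CSI Ephemeral-volume cases use the inline csi: pod volume source, where the volume's lifetime is tied to the pod and no PVC/PV objects are created; this only works if the driver advertises the Ephemeral volume lifecycle mode. A sketch (volume name hypothetical; the driver name is the one under test here):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := true
	vol := corev1.Volume{
		Name: "inline-vol", // hypothetical
		VolumeSource: corev1.VolumeSource{
			// Inline CSI ephemeral volume: provisioned for this pod only
			// and torn down with it, matching the read-only/read-write
			// inline cases above.
			CSI: &corev1.CSIVolumeSource{
				Driver:   "ebs.csi.aws.com",
				ReadOnly: &readOnly,
			},
		},
	}
	fmt.Println(vol.CSI.Driver)
}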
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
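The block-volmode cases above provision volumes with volumeMode: Block, which a pod consumes as a raw device through volumeDevices rather than a filesystem mount; the "mismatched mode" case asserts the failure when the pod asks for the wrong kind. A sketch of the claim and the consuming container (claim name, volume name, and device path hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	block := corev1.PersistentVolumeBlock
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "raw-claim"}, // hypothetical
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:  &block,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("4Gi"),
				},
			},
		},
	}
	// Block-mode volumes are attached as a device node, not mounted:
	container := corev1.Container{
		Name:  "consumer",
		Image: "busybox",
		VolumeDevices: []corev1.VolumeDevice{{
			Name:       "raw-vol",   // hypothetical pod volume name
			DevicePath: "/dev/xvda", // device path exposed inside the container
		}},
	}
	fmt.Println(*pvc.Spec.VolumeMode, container.VolumeDevices[0].DevicePath)
}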
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
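The read-write-once-pod cases exercise the ReadWriteOncePod access mode, which pins a volume to a single pod rather than merely a single node as plain ReadWriteOnce does; both cases above check that a second pod is blocked. A sketch of a claim spec requesting it:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Only one pod cluster-wide may use a volume bound with this mode.
	spec := corev1.PersistentVolumeClaimSpec{
		AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOncePod},
	}
	fmt.Println(spec.AccessModes[0])
}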
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
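The delayed-binding topology case pairs volumeBindingMode: WaitForFirstConsumer with allowedTopologies, so provisioning waits for pod scheduling and must land in an allowed zone; the test asserts that a pod whose own node selection conflicts with those terms fails to schedule. A sketch of such a StorageClass (class name and zone key/value hypothetical, though the provisioner matches the driver under test):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	wffc := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "topo-aware"}, // hypothetical
		Provisioner:       "ebs.csi.aws.com",
		VolumeBindingMode: &wffc, // provisioning deferred until a pod is scheduled
		AllowedTopologies: []corev1.TopologySelectorTerm{{
			MatchLabelExpressions: []corev1.TopologySelectorLabelRequirement{{
				Key:    "topology.ebs.csi.aws.com/zone", // hypothetical zone label key
				Values: []string{"us-east-1a"},          // hypothetical
			}},
		}},
	}
	fmt.Println(sc.Provisioner, *sc.VolumeBindingMode)
}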
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
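The snapshottable cases drive the external snapshot API; the delete-vs-retain split in the test patterns corresponds to the DeletionPolicy on the VolumeSnapshotClass, not a field on the snapshot itself. A sketch of a snapshot taken from an existing claim, assuming the external-snapshotter client module (class and claim names hypothetical):

package main

import (
	"fmt"

	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v6/apis/volumesnapshot/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	className := "csi-aws-vsc" // hypothetical VolumeSnapshotClass
	pvcName := "source-claim"  // hypothetical source PVC
	snap := snapshotv1.VolumeSnapshot{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-snap"}, // hypothetical
		Spec: snapshotv1.VolumeSnapshotSpec{
			VolumeSnapshotClassName: &className,
			// A snapshot is cut either from an existing claim (as here) or
			// bound to a pre-provisioned VolumeSnapshotContent.
			Source: snapshotv1.VolumeSnapshotSource{
				PersistentVolumeClaimName: &pvcName,
			},
		},
	}
	fmt.Println(*snap.Spec.Source.PersistentVolumeClaimName)
}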
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
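The Generic Ephemeral-volume cases use the ephemeral: pod volume source: a full PVC is stamped out from an embedded template, owned by the pod, and garbage-collected with it, unlike the inline CSI form sketched earlier, which creates no claim at all. A sketch (volume name, class, and size hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "ebs-sc" // hypothetical StorageClass
	vol := corev1.Volume{
		Name: "scratch", // hypothetical
		VolumeSource: corev1.VolumeSource{
			Ephemeral: &corev1.EphemeralVolumeSource{
				// The claim created from this template is named
				// <pod name>-<volume name> and deleted with the pod.
				VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
					ObjectMeta: metav1.ObjectMeta{
						Labels: map[string]string{"use": "scratch"}, // hypothetical
					},
					Spec: corev1.PersistentVolumeClaimSpec{
						AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
						StorageClassName: &sc,
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceStorage: resource.MustParse("1Gi"),
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(vol.Ephemeral.VolumeClaimTemplate.Spec.AccessModes)
}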
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] driver supports claim and class parameters
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must not run a pod if a claim is not reserved for it
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must retry NodePrepareResource
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must unprepare resources for force-deleted pod
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet registers plugin
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple drivers work
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes reallocation works
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with network-attached resources schedules onto different nodes
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with delayed allocation uses all resources
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with immediate allocation uses all resources
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] pods evicted from tainted nodes have pod disruption condition
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs for the provided path
Kubernetes e2e suite [It] [sig-node] kubelet kubectl node-logs <node-name> [Feature:add node log viewer] should return the logs for the requested service
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates Pods with non-empty schedulingGates are blocked on scheduling [Feature:PodSchedulingReadiness] [alpha]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should add SELinux mount option to existing mount options
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for CSI driver that does not support SELinux mount
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for Pod without SELinux context
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for RWO volume
Kubernetes e2e suite [It] [sig-storage] CSI mock volume SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should pass SELinux mount option for RWOP volume and Pod with SELinux context set
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] deletion should be idempotent
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with different parameters
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when attachable [Feature:Flexvolumes]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [It] [sig-storage] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
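The (block volmode) patterns above provision the claim with `volumeMode: Block` and hand it to the container as a raw device rather than a mounted filesystem; the "mismatched mode" case then confirms that a Block claim cannot back a `volumeMounts` entry (and vice versa). A minimal sketch of the two sides of that contract, with illustrative names:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	block := v1.PersistentVolumeBlock // the alternative is v1.PersistentVolumeFilesystem

	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "raw-block-claim"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:  &block,
		},
	}

	container := v1.Container{
		Name:  "c",
		Image: "busybox",
		// A Block-mode claim is consumed through volumeDevices, not
		// volumeMounts; the raw device node appears at DevicePath.
		VolumeDevices: []v1.VolumeDevice{{
			Name:       "data",
			DevicePath: "/dev/xvda",
		}},
	}
	fmt.Println(pvc.Name, container.VolumeDevices[0].DevicePath)
}
```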
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
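The fsgroupchangepolicy matrix above toggles `securityContext.fsGroupChangePolicy`: with `Always`, kubelet recursively chowns the volume to the pod's fsGroup on every mount; with `OnRootMismatch` it skips that walk whenever the volume root already carries the expected group, which is why the last case expects ownership changes to be skipped. A sketch of the relevant pod security context (GID 3000 is illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	fsGroup := int64(3000)
	// Only recursively change ownership/permissions when the volume
	// root's group does not already match fsGroup.
	policy := v1.FSGroupChangeOnRootMismatch // the other value is v1.FSGroupChangeAlways

	spec := v1.PodSpec{
		SecurityContext: &v1.PodSecurityContext{
			FSGroup:             &fsGroup,
			FSGroupChangePolicy: &policy,
		},
	}
	fmt.Println(*spec.SecurityContext.FSGroupChangePolicy)
}
```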
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
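The pvc- and snapshot-data-source provisioning cases pre-populate a newly provisioned volume from an existing claim (a clone) or from a VolumeSnapshot, both expressed through `spec.dataSource` on the new claim. A hedged sketch of the snapshot variant; the snapshot name is illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	snapshotAPIGroup := "snapshot.storage.k8s.io"

	pvcSpec := v1.PersistentVolumeClaimSpec{
		AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		// Restore from a snapshot. For the pvc-data-source (clone) cases,
		// Kind would be "PersistentVolumeClaim" with a nil APIGroup.
		DataSource: &v1.TypedLocalObjectReference{
			APIGroup: &snapshotAPIGroup,
			Kind:     "VolumeSnapshot",
			Name:     "source-snapshot",
		},
	}
	fmt.Println(pvcSpec.DataSource.Kind)
}
```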
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
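The subPath block above mounts a single directory or file from inside the volume instead of its root, and probes both escape attempts (the "outside the volume" cases, e.g. via backstepping) and readOnly enforcement. A minimal sketch of the volumeMount shape involved, with illustrative names:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mount := v1.VolumeMount{
		Name:      "data",
		MountPath: "/mnt/logs",
		// SubPath is resolved relative to the volume root; kubelet must
		// reject anything that resolves outside it (e.g. through "..").
		SubPath:  "app/logs",
		ReadOnly: true, // the readOnly cases assert writes through this mount fail
	}
	fmt.Println(mount.MountPath, mount.SubPath)
}
```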
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
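The delayed-binding topology cases pair `volumeBindingMode: WaitForFirstConsumer` with `allowedTopologies`, so provisioning waits for pod scheduling and must land in a zone the class permits; the failure case gives the pod a conflicting topology requirement. A sketch of such a StorageClass (the zone value is illustrative; the provisioner matches the in-tree AWS driver under test):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := storagev1.VolumeBindingWaitForFirstConsumer

	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "topology-aware"},
		Provisioner: "kubernetes.io/aws-ebs",
		// Delay PV creation until a pod using the claim is scheduled,
		// then provision only within the allowed topology below.
		VolumeBindingMode: &mode,
		AllowedTopologies: []v1.TopologySelectorTerm{{
			MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{{
				Key:    "topology.kubernetes.io/zone",
				Values: []string{"us-east-1a"},
			}},
		}},
	}
	fmt.Println(sc.Name)
}
```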
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
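The volumeLimits cases check that each node's CSINode object advertises an attachable-volume limit and that scheduling stays within it. A sketch of where that number lives in the storage API; the driver name, node ID, and count are illustrative:

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
)

func main() {
	count := int32(25) // e.g. an EBS attach limit for the node's instance type

	node := storagev1.CSINode{
		Spec: storagev1.CSINodeSpec{
			Drivers: []storagev1.CSINodeDriver{{
				Name:   "ebs.csi.aws.com",
				NodeID: "i-0123456789abcdef0",
				// Allocatable.Count is the per-node volume limit the
				// "should support volume limits" case schedules against.
				Allocatable: &storagev1.VolumeNodeResources{Count: &count},
			}},
		},
	}
	fmt.Println(*node.Spec.Drivers[0].Allocatable.Count)
}
```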
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
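The Generic Ephemeral-volume groups above declare a PVC template inline in the pod spec: the claim is created together with the pod (named after pod and volume) and garbage-collected when the pod goes away, which is what lets two pods share "the same volume definition" while each gets its own claim. A hedged sketch, with illustrative names and size:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := v1.Volume{
		Name: "scratch",
		VolumeSource: v1.VolumeSource{
			// The PVC is created alongside the pod and deleted with it;
			// marking the corresponding volumeMount ReadOnly gives the
			// read-only inline variant exercised above.
			Ephemeral: &v1.EphemeralVolumeSource{
				VolumeClaimTemplate: &v1.PersistentVolumeClaimTemplate{
					Spec: v1.PersistentVolumeClaimSpec{
						AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
						Resources: v1.ResourceRequirements{
							Requests: v1.ResourceList{
								v1.ResourceStorage: resource.MustParse("1Gi"),
							},
						},
					},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}
```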
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e