Result | SUCCESS
Tests | 12 failed / 50 succeeded
Started |
Elapsed | 3h45m
Revision | release-1.7
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\sjobs\sand\spods\screated\sby\scronjob$'
test/e2e/framework/framework.go:188
Jan 30 02:31:01.010: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
from junit.kubetest.01.xml
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 30 02:27:35.022: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1145
STEP: Create the cronjob
STEP: Wait for the CronJob to create new Job
STEP: Delete the cronjob
STEP: Verify if cronjob does not leave jobs nor pods behind
STEP: Gathering metrics
Jan 30 02:28:00.607: INFO: The status of Pod kube-controller-manager-capz-conf-x2a841-control-plane-kn8c7 is Running (Ready = true)
Jan 30 02:28:00.909: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:188
Jan 30 02:28:00.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 30 02:28:00.944: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure
[the same message repeated roughly every 2 seconds, from 02:28:02.978 through 02:31:01.010, while the node remained NotReady]
Jan 30 02:31:01.010: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "gc-4417" for this suite.
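The failure here (and in the cases below) has the same signature: the spec body completed, but the framework's AfterEach check "All nodes should be ready after test" failed because worker node capz-conf-zghb7 had been NotReady and tainted node.kubernetes.io/unreachable since roughly 01:17, long before this test started. A minimal triage sketch, assuming the cluster is still reachable through the kubeconfig the suite used (/tmp/kubeconfig); these kubectl invocations are illustrative and do not appear in the job output:

# Overall node state and the taints the node controller applied (sketch, not from the job logs)
export KUBECONFIG=/tmp/kubeconfig
kubectl get node capz-conf-zghb7 -o wide
kubectl describe node capz-conf-zghb7 | grep -A 3 'Taints:'
# One line per node condition: type, status, reason
kubectl get node capz-conf-zghb7 -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

A Ready condition stuck at Unknown (typically with reason NodeStatusUnknown) means the kubelet stopped posting heartbeats, which is consistent with the unreachable taints applied at 01:17:54 and 01:17:59.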
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\snot\sdelete\sdependents\sthat\shave\sboth\svalid\sowner\sand\sowner\sthat'\''s\swaiting\sfor\sdependents\sto\sbe\sdeleted\s\[Conformance\]$'
test/e2e/framework/framework.go:188
Jan 30 02:23:26.900: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
from junit.kubetest.01.xml
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 02:20:13.608: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] test/e2e/framework/framework.go:652 Jan 30 02:20:13.864: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure �[1mSTEP�[0m: create the rc1 �[1mSTEP�[0m: create the rc2 �[1mSTEP�[0m: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well �[1mSTEP�[0m: delete the rc simpletest-rc-to-be-deleted �[1mSTEP�[0m: wait for the rc to be deleted �[1mSTEP�[0m: Gathering metrics Jan 30 02:20:25.268: INFO: The status of Pod kube-controller-manager-capz-conf-x2a841-control-plane-kn8c7 is Running (Ready = true) Jan 30 02:20:25.616: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 30 02:20:25.616: INFO: Deleting pod "simpletest-rc-to-be-deleted-4dxw5" in namespace "gc-7880" Jan 30 02:20:25.663: INFO: Deleting pod "simpletest-rc-to-be-deleted-4vzkw" in namespace "gc-7880" Jan 30 02:20:25.704: INFO: Deleting pod "simpletest-rc-to-be-deleted-587x5" in namespace "gc-7880" Jan 30 02:20:25.747: INFO: Deleting pod "simpletest-rc-to-be-deleted-5ms2p" in namespace "gc-7880" Jan 30 02:20:25.786: INFO: Deleting pod "simpletest-rc-to-be-deleted-5x8bd" in namespace "gc-7880" Jan 30 02:20:25.833: INFO: Deleting pod "simpletest-rc-to-be-deleted-65q6x" in namespace "gc-7880" Jan 30 02:20:25.875: INFO: Deleting pod "simpletest-rc-to-be-deleted-7h5n9" in namespace "gc-7880" Jan 30 02:20:25.918: INFO: Deleting pod "simpletest-rc-to-be-deleted-7m98n" in namespace "gc-7880" Jan 30 02:20:25.962: INFO: Deleting pod "simpletest-rc-to-be-deleted-7m9p6" in namespace "gc-7880" Jan 30 02:20:26.002: INFO: Deleting pod "simpletest-rc-to-be-deleted-84qqc" in namespace "gc-7880" Jan 30 02:20:26.047: INFO: Deleting pod "simpletest-rc-to-be-deleted-8kv79" in namespace "gc-7880" Jan 30 02:20:26.090: INFO: Deleting pod "simpletest-rc-to-be-deleted-8qwwh" in namespace "gc-7880" Jan 30 02:20:26.137: INFO: Deleting pod "simpletest-rc-to-be-deleted-9lhb4" in namespace "gc-7880" Jan 30 02:20:26.179: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-9wqzf" in namespace "gc-7880" Jan 30 02:20:26.224: INFO: Deleting pod "simpletest-rc-to-be-deleted-bh7hb" in namespace "gc-7880" Jan 30 02:20:26.262: INFO: Deleting pod "simpletest-rc-to-be-deleted-bhmlw" in namespace "gc-7880" Jan 30 02:20:26.305: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjp6q" in namespace "gc-7880" Jan 30 02:20:26.349: INFO: Deleting pod "simpletest-rc-to-be-deleted-cqwdp" in namespace "gc-7880" Jan 30 02:20:26.393: INFO: Deleting pod "simpletest-rc-to-be-deleted-crmn7" in namespace "gc-7880" Jan 30 02:20:26.431: INFO: Deleting pod "simpletest-rc-to-be-deleted-fcm8q" in namespace "gc-7880" Jan 30 02:20:26.473: INFO: Deleting pod "simpletest-rc-to-be-deleted-fwb6w" in namespace "gc-7880" Jan 30 02:20:26.519: INFO: Deleting pod "simpletest-rc-to-be-deleted-gcms9" in namespace "gc-7880" Jan 30 02:20:26.563: INFO: Deleting pod "simpletest-rc-to-be-deleted-gtq8z" in namespace "gc-7880" Jan 30 02:20:26.606: INFO: Deleting pod "simpletest-rc-to-be-deleted-j4gjf" in namespace "gc-7880" Jan 30 02:20:26.652: INFO: Deleting pod "simpletest-rc-to-be-deleted-j5n6f" in namespace "gc-7880" Jan 30 02:20:26.702: INFO: Deleting pod "simpletest-rc-to-be-deleted-jbjrp" in namespace "gc-7880" Jan 30 02:20:26.745: INFO: Deleting pod "simpletest-rc-to-be-deleted-jtfxz" in namespace "gc-7880" [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188 Jan 30 02:20:26.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 02:20:26.832: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:28.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:30.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:32.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:34.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:36.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:38.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:20:40.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:42.868: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:44.868: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:46.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:48.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:50.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:52.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:54.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:56.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:20:58.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:00.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:02.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:21:04.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:06.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:08.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:10.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:12.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:14.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:16.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:18.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:20.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:22.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:24.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:26.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:21:28.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:30.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:32.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:34.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:36.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:38.870: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:40.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:42.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:44.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:46.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:48.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:50.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:21:52.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:54.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:56.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:21:58.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:00.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:02.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:04.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:06.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:08.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:10.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:12.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:14.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:22:16.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:18.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:20.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:22.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:24.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:26.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:28.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:30.870: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:32.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:34.867: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:36.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:22:38.866: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:22:40.865: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
[the same "Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController" message repeats roughly every 2s until 02:23:26.900]
Jan 30 02:23:26.900: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "gc-7880" for this suite.
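Each of the failures on this page ends in the same AfterEach check: the framework waits up to 3m0s for every node to report Ready, and capz-conf-zghb7 stays NotReady while carrying the node.kubernetes.io/unreachable taints applied by the node controller. As a rough aid for reproducing that check outside the suite, here is a minimal client-go sketch; it assumes the /tmp/kubeconfig path the run logged, and the output format is illustrative rather than part of the e2e framework.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: same kubeconfig path that the e2e run logged.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// A node counts as Ready only when its Ready condition is True.
    		ready := false
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("node=%s ready=%v taints=%v\n", n.Name, ready, n.Spec.Taints)
    	}
    }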
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sorphan\sRS\screated\sby\sdeployment\swhen\sdeleteOptions\.PropagationPolicy\sis\sOrphan\s\[Conformance\]$'
test/e2e/framework/framework.go:188
Jan 30 01:26:57.658: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
from junit.kubetest.01.xml
[BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 30 01:23:55.884: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] test/e2e/framework/framework.go:652
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Jan 30 01:23:57.231: INFO: The status of Pod kube-controller-manager-capz-conf-x2a841-control-plane-kn8c7 is Running (Ready = true)
Jan 30 01:23:57.547: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:188
Jan 30 01:23:57.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 30 01:23:57.581: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
[the same "Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController" message repeats roughly every 2s until 01:26:57.657]
Jan 30 01:26:57.658: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "gc-6226" for this suite.
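The test above exercises deleteOptions.PropagationPolicy=Orphan, i.e. the Deployment is deleted while its ReplicaSet (and Pods) are deliberately left behind for the garbage collector to ignore. For reference, a minimal client-go sketch of that call; the package, clientset, namespace, and deployment name here are placeholders, not values taken from this run.

    package gcexample

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteDeploymentOrphan deletes a Deployment with the Orphan propagation
    // policy, so its ReplicaSet and Pods are left in place instead of being
    // cascading-deleted.
    func deleteDeploymentOrphan(cs kubernetes.Interface, namespace, name string) error {
    	policy := metav1.DeletePropagationOrphan
    	return cs.AppsV1().Deployments(namespace).Delete(context.TODO(), name,
    		metav1.DeleteOptions{PropagationPolicy: &policy})
    }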
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\slist\sand\sdelete\sa\scollection\sof\sDaemonSets\s\[Conformance\]$'
test/e2e/framework/framework.go:188
Jan 30 01:20:55.359: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
from junit.kubetest.01.xml
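The log below is the DaemonSet collection-deletion test (the "listing all DeamonSets" and "DeleteCollection of the DaemonSets" steps). As a reference for the API being exercised, a minimal client-go sketch follows; the package name, namespace, and label selector are placeholders, not values from this run.

    package dsexample

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteDaemonSetCollection lists the DaemonSets matching a label selector
    // and then deletes them as a collection, mirroring the List + DeleteCollection
    // steps in the log below.
    func deleteDaemonSetCollection(cs kubernetes.Interface, namespace, selector string) error {
    	if _, err := cs.AppsV1().DaemonSets(namespace).List(context.TODO(),
    		metav1.ListOptions{LabelSelector: selector}); err != nil {
    		return err
    	}
    	return cs.AppsV1().DaemonSets(namespace).DeleteCollection(context.TODO(),
    		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
    }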
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 01:16:54.365: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename daemonsets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/framework/framework.go:652 �[1mSTEP�[0m: Creating simple DaemonSet "daemon-set" �[1mSTEP�[0m: Check that daemon pods launch on every node of the cluster. Jan 30 01:16:54.774: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:16:54.805: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:16:54.805: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:16:55.836: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:16:55.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:16:55.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:16:56.841: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:16:56.870: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:16:56.870: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:16:57.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:16:57.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:16:57.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:16:58.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:16:58.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:16:58.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:16:59.844: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:16:59.875: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 
01:16:59.875: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:00.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:00.868: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:00.868: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:01.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:01.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:01.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:02.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:02.892: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:02.892: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:03.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:03.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:03.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:04.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:04.881: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:04.882: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:05.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:05.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:05.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:06.840: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:06.869: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:06.869: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:07.837: INFO: DaemonSet pods can't tolerate node 
capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:07.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:07.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:08.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:08.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:08.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:09.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:09.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:09.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:10.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:10.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:10.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:11.836: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:11.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:11.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:12.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:12.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:12.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:13.836: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:13.865: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:13.865: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:14.845: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} 
{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:14.887: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:14.887: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:15.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:15.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:15.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:16.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:16.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:16.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:17.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:17.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:17.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:18.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:18.869: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:18.869: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:19.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:19.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:19.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:20.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:20.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:20.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:21.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:21.867: INFO: Number of 
nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:21.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:22.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:22.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:22.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:23.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:23.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:23.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:24.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:24.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:24.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:25.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:25.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:25.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:26.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:26.869: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:26.869: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:27.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:27.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:27.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:28.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:28.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:28.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 
30 01:17:29.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:29.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:29.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:30.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:30.868: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:30.868: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:31.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:31.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:31.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:32.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:32.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:32.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:33.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:33.873: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:33.873: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:34.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:34.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:34.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:35.836: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:35.865: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:35.865: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:36.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:36.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:36.866: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:37.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:37.868: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:37.868: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:38.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:38.892: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:38.892: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:39.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:39.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Jan 30 01:17:39.867: INFO: Node capz-conf-mcf4n is running 0 daemon pod, expected 1 Jan 30 01:17:40.838: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:40.868: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:40.868: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:41.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:41.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:41.867: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:42.837: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:42.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:42.867: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:43.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 
01:17:43.884: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:43.884: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:44.845: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:44.896: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:44.896: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:45.845: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:45.883: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:45.883: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:46.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:46.884: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:46.884: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:47.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:47.884: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:47.884: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:48.845: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:48.884: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:48.884: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:49.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:49.919: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:49.920: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:50.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:50.883: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:50.883: INFO: Node capz-conf-zghb7 is running 
0 daemon pod, expected 1 Jan 30 01:17:51.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:51.884: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:51.884: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:52.845: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:52.883: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:52.883: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:53.845: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:53.883: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:53.883: INFO: Node capz-conf-zghb7 is running 0 daemon pod, expected 1 Jan 30 01:17:54.846: INFO: DaemonSet pods can't tolerate node capz-conf-x2a841-control-plane-kn8c7 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Jan 30 01:17:54.846: INFO: DaemonSet pods can't tolerate node capz-conf-zghb7 with taints [{Key:node.kubernetes.io/unreachable Value: Effect:NoSchedule TimeAdded:2023-01-30 01:17:54 +0000 UTC}], skip checking this node Jan 30 01:17:54.889: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Jan 30 01:17:54.889: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set STEP: listing all DaemonSets STEP: DeleteCollection of the DaemonSets STEP: Verify that ReplicaSets have been deleted [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 Jan 30 01:17:55.125: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"30816"},"items":null} Jan 30 01:17:55.164: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"30816"},"items":[{"metadata":{"name":"daemon-set-nxwgw","generateName":"daemon-set-","namespace":"daemonsets-1692","uid":"44bf9517-d0d5-4bfc-a60f-b364d2dfac34","resourceVersion":"30811","creationTimestamp":"2023-01-30T01:16:54Z","deletionTimestamp":"2023-01-30T01:18:25Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"87ddca880b6e80d7339a8aa0ed7ee3cf6a1763e0a352f367da8d614a25316436","cni.projectcalico.org/podIP":"192.168.121.214/32","cni.projectcalico.org/podIPs":"192.168.121.214/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"4b5f317d-641f-4fb8-a900-452b2aa530d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-30T01:16:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b5f317d-641f-4fb8-a900-452b2aa530d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-30T01:16:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},"subresource":"status"},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-30T01:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-pwx89","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-pwx89","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securit
yContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-zghb7","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-zghb7"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Pending","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:16:54Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:16:54Z","reason":"ContainersNotReady","message":"containers with unready status: [app]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:16:54Z","reason":"ContainersNotReady","message":"containers with unready status: [app]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:16:54Z"}],"hostIP":"10.1.0.5","startTime":"2023-01-30T01:16:54Z","containerStatuses":[{"name":"app","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"","started":false}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-rjvph","generateName":"daemon-set-","namespace":"daemonsets-1692","uid":"437e8558-becd-4b50-8f9c-a2c57fde4f57","resourceVersion":"30812","creationTimestamp":"2023-01-30T01:16:54Z","deletionTimestamp":"2023-01-30T01:18:25Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"cni.projectcalico.org/containerID":"66e01b815bfc0ebd062346da403d938383d432f2945d52c6b91a8dce7a3d04b4","cni.projectcalico.org/podIP":"192.168.157.8/32","cni.projectcalico.org/podIPs":"192.168.157.8/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"4b5f317d-641f-4fb8-a900-452b2aa530d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-30T01:16:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4b5f317d-641f-4fb8-a900-452b2aa530d0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"
f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"calico.exe","operation":"Update","apiVersion":"v1","time":"2023-01-30T01:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}},"subresource":"status"},{"manager":"kubelet.exe","operation":"Update","apiVersion":"v1","time":"2023-01-30T01:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.157.8\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-6fbhf","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-6fbhf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"capz-conf-mcf4n","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["capz-conf-mcf4n"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:16:54Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:17:40Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:17:40Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-30T01:16:54Z"}],"hostIP":"10.1.0.4","podIP":"192.168.157.8","podIPs":[{"ip":"192.168.157.8"}],"startTime":"2023-01-30T01:16:54Z","containerStatuses":[{"name":"app","state":{
"running":{"startedAt":"2023-01-30T01:17:38Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://ce8892b621541a2ca7126da7c448cc65407c4d90e4a2142e90afcc89514adbde","started":true}],"qosClass":"BestEffort"}}]} Jan 30 01:17:55.206: INFO: Condition Ready of node capz-conf-zghb7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Jan 30 01:17:55.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 01:17:55.289: INFO: Condition Ready of node capz-conf-zghb7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:57.330: INFO: Condition Ready of node capz-conf-zghb7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:17:59.331: INFO: Condition Ready of node capz-conf-zghb7 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 01:18:01.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:03.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:05.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:07.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:09.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:11.329: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:13.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:15.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:18:17.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:19.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:21.334: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:23.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:25.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:27.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:29.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:31.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:33.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:35.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:37.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:39.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:18:41.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:43.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:45.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:47.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:49.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:51.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:53.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:55.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:57.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:18:59.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:01.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:03.334: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:19:05.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:07.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:09.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:11.332: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:13.331: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:15.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:17.338: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:19.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:21.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:23.329: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:25.335: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:27.334: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:19:29.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:31.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:33.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:35.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:37.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:39.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:41.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:43.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:45.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:47.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:49.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:51.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:19:53.333: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:55.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:57.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:19:59.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:01.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:03.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:05.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:07.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:09.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:11.330: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:13.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:15.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:20:17.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:19.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:21.327: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:23.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:25.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:27.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:29.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:31.327: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:33.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:35.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:37.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:39.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:20:41.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:43.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:45.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:47.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:49.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:51.325: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:53.334: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:55.324: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:55.359: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 01:20:55.359: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000583a00, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f STEP: Destroying namespace "daemonsets-1692" for this suite.
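Note on the repeated "Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController" lines and the final FAIL above: once the kubelet stops posting status, the node controller marks the node unreachable and taints it, and the suite's AfterEach node-readiness check then fails. A minimal sketch of inspecting that state by hand, assuming access to the same cluster through the kubeconfig the suite used (/tmp/kubeconfig); the node name is taken from the log and the commands are plain kubectl, not part of the e2e framework:

# Hypothetical manual triage, not part of the test run above.
export KUBECONFIG=/tmp/kubeconfig
# Readiness of every node; capz-conf-zghb7 reports NotReady in this run.
kubectl get nodes -o wide
# Conditions plus the node.kubernetes.io/unreachable NoSchedule/NoExecute taints
# the node controller applied at 01:17:54 and 01:17:59.
kubectl describe node capz-conf-zghb7
# The same taints as structured output.
kubectl get node capz-conf-zghb7 -o jsonpath='{.spec.taints}'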
Filter through log files | View test history on testgrid
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'
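The --ginkgo.focus value above is a regular expression with spaces, brackets, and hyphens escaped; decoded, it selects exactly one spec. A minimal sketch of re-running only that spec from a kubernetes/kubernetes checkout, assuming the cluster from this job is still reachable; the command is copied verbatim from the entry above, with only the decoded spec name added as a comment:

# Decoded focus: Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet
# functionality [StatefulSetBasic] Scaling should happen in predictable order
# and halt if any stateful pod is unhealthy [Slow] [Conformance]
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Slow\]\s\[Conformance\]$'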
test/e2e/framework/framework.go:188 Jan 30 01:31:44.160: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113from junit.kubetest.01.xml
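In the transcript below, the suite makes pods unhealthy by moving the httpd document root's index.html out of the way with kubectl exec, which drives the pod's readiness to Ready=false and lets it verify that scaling halts on an unhealthy stateful pod; moving the file back restores readiness. A minimal sketch of the same toggle done by hand, assuming the pod ss-0 in namespace statefulset-5942 from this run; the commands mirror the kubectl invocations recorded in the log:

# Break readiness: ss-0 stops serving index.html and drops to Ready=false.
kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 \
  exec ss-0 -- /bin/sh -x -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# Restore readiness once the halted scale-up/scale-down has been observed.
kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 \
  exec ss-0 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'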
[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 30 01:26:57.695: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 STEP: Creating service test in namespace statefulset-5942 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] test/e2e/framework/framework.go:652 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5942 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5942 Jan 30 01:26:58.052: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 01:27:08.085: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 30 01:27:08.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 01:27:08.680: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 30 01:27:08.680: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 01:27:08.680: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 30 01:27:08.711: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 30 01:27:18.747: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 30 01:27:18.747: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 01:27:18.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999628s Jan 30 01:27:19.913: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.967724872s Jan 30 01:27:20.946: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.935540803s Jan 30 01:27:21.978: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.902214769s Jan 30 01:27:23.010: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.870496737s Jan 30 01:27:24.043: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.838584005s Jan 30 01:27:25.075: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.805797218s Jan 30 01:27:26.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.773871386s Jan 30 01:27:27.138: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.741983531s Jan 30 01:27:28.171: INFO: Verifying statefulset ss doesn't scale past 1 for another 710.160807ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5942 Jan 30 01:27:29.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 01:27:29.746: INFO: stderr: "+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\n" Jan 30 01:27:29.746: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 01:27:29.746: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 01:27:29.777: INFO: Found 1 stateful pods, waiting for 3 Jan 30 01:27:39.811: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 01:27:39.811: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 01:27:39.811: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 01:27:49.810: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 01:27:49.810: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 01:27:49.810: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 30 01:27:49.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 01:27:50.387: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 30 01:27:50.387: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 01:27:50.387: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 30 01:27:50.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 01:27:50.892: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 30 01:27:50.892: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 01:27:50.892: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 30 01:27:50.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 01:27:51.404: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 30 01:27:51.404: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 01:27:51.404: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 30 01:27:51.404: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 01:27:51.435: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 30 01:28:01.556: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 30 01:28:01.556: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 30 01:28:01.556: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 30 01:28:01.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999291s Jan 30 01:28:02.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.935073834s Jan 30 01:28:03.777: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 7.902673409s Jan 30 01:28:04.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.869208082s Jan 30 01:28:05.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.835761451s Jan 30 01:28:06.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.803244076s Jan 30 01:28:07.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.764627309s Jan 30 01:28:08.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.725808421s Jan 30 01:28:10.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.68442679s Jan 30 01:28:11.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 645.812332ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5942 Jan 30 01:28:12.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 01:28:12.630: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 30 01:28:12.630: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 01:28:12.630: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 01:28:12.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 01:28:13.099: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 30 01:28:13.099: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 01:28:13.099: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 01:28:13.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-5942 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 01:28:13.599: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 30 01:28:13.599: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 01:28:13.599: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 01:28:13.599: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 30 01:28:43.749: INFO: Deleting all statefulset in ns statefulset-5942 Jan 30 01:28:43.788: INFO: Scaling statefulset ss to 0 Jan 30 01:28:43.899: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 01:28:43.936: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:188 Jan 30 01:28:44.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 01:28:44.090: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 01:28:46.135 to 01:31:44.160: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. (the same Failure/INFO message was logged every ~2s for the whole interval)
Jan 30 01:31:44.160: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "statefulset-5942" for this suite.
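The failure in this block, and in the blocks below, comes from the framework's AfterEach check that every node is Ready within 3m0s; capz-conf-zghb7 stays NotReady and carries the node.kubernetes.io/unreachable taints added by the node controller. As an illustration only (this is not part of the e2e framework), a minimal client-go sketch that reads the same Ready condition and taints, assuming the /tmp/kubeconfig path and the node name taken from the log:

// checknode.go: print the Ready condition and taints of one node (illustrative sketch).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the ">>> kubeConfig: /tmp/kubeconfig" lines in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "capz-conf-zghb7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The e2e framework requires this condition to be True for every node after a test.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since %s\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
	// The node controller adds the unreachable taints reported in the log.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s added %v\n", t.Key, t.Effect, t.TimeAdded)
	}
}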
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-autoscaling\]\s\[Feature\:HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
test/e2e/framework/framework.go:188 Jan 30 01:37:46.712: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 30 01:31:44.197: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename horizontal-pod-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Should scale from 1 pod to 3 pods and from 3 to 5 test/e2e/autoscaling/horizontal_pod_autoscaling.go:50
STEP: Running consuming RC rs via apps/v1beta2, Kind=ReplicaSet with 1 replicas
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-9280
STEP: creating replicaset rs in namespace horizontal-pod-autoscaling-9280
STEP: Running controller
STEP: creating replication controller rs-ctrl in namespace horizontal-pod-autoscaling-9280 Jan 30 01:32:09.681: INFO: Waiting for amount of service:rs-ctrl endpoints to be 1 Jan 30 01:32:09.713: INFO: RC rs: consume 250 millicores in total Jan 30 01:32:09.713: INFO: RC rs: sending request to consume 0 millicores Jan 30 01:32:09.713: INFO: ConsumeCPU URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=0&requestSizeMillicores=100 } Jan 30 01:32:09.748: INFO: RC rs: setting consumption to 250 millicores in total Jan 30 01:32:09.748: INFO: RC rs: consume 0 MB in total Jan 30 01:32:09.748: INFO: RC rs: setting consumption to 0 MB in total Jan 30 01:32:09.748: INFO: RC rs: sending request to consume 0 MB Jan 30 01:32:09.748: INFO: ConsumeMem URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 30 01:32:09.748: INFO: RC rs: consume custom metric 0 in total Jan 30 01:32:09.748: INFO: RC rs: setting bump of metric QPS to 0 in total Jan 30 01:32:09.748: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 30 01:32:09.748: INFO: ConsumeCustomMetric URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 30 01:32:09.814: INFO: waiting for 3 replicas (current: 1) Jan 30 01:32:29.847: INFO: waiting for 3 replicas (current: 1) Jan 30 01:32:39.748: INFO: RC rs: sending request to consume 250 millicores Jan 30 01:32:39.748: INFO: ConsumeCPU URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 30 01:32:39.785: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 30 01:32:39.785: INFO: RC rs: sending request to consume 0 MB Jan 30 01:32:39.785: INFO: ConsumeCustomMetric URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 30 01:32:39.785: INFO: ConsumeMem URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeMem false
false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 30 01:32:49.846: INFO: waiting for 3 replicas (current: 1) Jan 30 01:33:09.846: INFO: waiting for 3 replicas (current: 1) Jan 30 01:33:12.808: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 30 01:33:12.808: INFO: RC rs: sending request to consume 0 MB Jan 30 01:33:12.808: INFO: ConsumeCustomMetric URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 30 01:33:12.808: INFO: RC rs: sending request to consume 250 millicores Jan 30 01:33:12.808: INFO: ConsumeMem URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 30 01:33:12.808: INFO: ConsumeCPU URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=250&requestSizeMillicores=100 } Jan 30 01:33:29.846: INFO: waiting for 3 replicas (current: 3) Jan 30 01:33:29.846: INFO: RC rs: consume 700 millicores in total Jan 30 01:33:29.846: INFO: RC rs: setting consumption to 700 millicores in total Jan 30 01:33:29.877: INFO: waiting for 5 replicas (current: 3) Jan 30 01:33:42.848: INFO: RC rs: sending request to consume 0 MB Jan 30 01:33:42.848: INFO: ConsumeMem URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 30 01:33:42.871: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 30 01:33:42.871: INFO: RC rs: sending request to consume 700 millicores Jan 30 01:33:42.871: INFO: ConsumeCustomMetric URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 30 01:33:42.871: INFO: ConsumeCPU URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 30 01:33:49.909: INFO: waiting for 5 replicas (current: 3) Jan 30 01:34:09.909: INFO: waiting for 5 replicas (current: 3) Jan 30 01:34:12.883: INFO: RC rs: sending request to consume 0 MB Jan 30 01:34:12.883: INFO: ConsumeMem URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeMem false false durationSec=30&megabytes=0&requestSizeMegabytes=100 } Jan 30 01:34:12.906: INFO: RC rs: sending request to consume 0 of custom metric QPS Jan 30 01:34:12.906: INFO: ConsumeCustomMetric URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 /api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/BumpMetric false false delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10 } Jan 30 01:34:15.918: INFO: RC rs: sending request to consume 700 millicores Jan 30 01:34:15.918: INFO: ConsumeCPU URL: {https capz-conf-x2a841-ef8a7cb.eastus.cloudapp.azure.com:6443 
/api/v1/namespaces/horizontal-pod-autoscaling-9280/services/rs-ctrl/proxy/ConsumeCPU false false durationSec=30&millicores=700&requestSizeMillicores=100 } Jan 30 01:34:29.910: INFO: waiting for 5 replicas (current: 5)
STEP: Removing consuming RC rs Jan 30 01:34:29.945: INFO: RC rs: stopping metric consumer Jan 30 01:34:29.945: INFO: RC rs: stopping CPU consumer Jan 30 01:34:29.945: INFO: RC rs: stopping mem consumer
STEP: deleting ReplicaSet.apps rs in namespace horizontal-pod-autoscaling-9280, will wait for the garbage collector to delete the pods Jan 30 01:34:40.066: INFO: Deleting ReplicaSet.apps rs took: 37.731669ms Jan 30 01:34:40.167: INFO: Terminating ReplicaSet.apps rs pods took: 101.096704ms
STEP: deleting ReplicationController rs-ctrl in namespace horizontal-pod-autoscaling-9280, will wait for the garbage collector to delete the pods Jan 30 01:34:44.340: INFO: Deleting ReplicationController rs-ctrl took: 34.56996ms Jan 30 01:34:44.441: INFO: Terminating ReplicationController rs-ctrl pods took: 100.852612ms
[AfterEach] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) test/e2e/framework/framework.go:188 Jan 30 01:34:46.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 30 01:34:46.643 to 01:37:46.712: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. (the same Failure/INFO message was logged every ~2s for the whole 3m wait)
Jan 30 01:37:46.712: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "horizontal-pod-autoscaling-9280" for this suite.
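This run bumps the RC's CPU consumption from 250 to 700 millicores and the ReplicaSet scales from 1 to 3 and then to 5 replicas. The replica counts come from the documented HorizontalPodAutoscaler rule, desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue). A small sketch of that arithmetic; the utilization and target numbers below are illustrative, not values read from this run:

// hpaDesiredReplicas applies the HPA scaling rule:
// desired = ceil(current * currentMetricValue / desiredMetricValue).
package main

import (
	"fmt"
	"math"
)

func hpaDesiredReplicas(currentReplicas int, currentUtilization, targetUtilization float64) int {
	return int(math.Ceil(float64(currentReplicas) * currentUtilization / targetUtilization))
}

func main() {
	// Illustrative numbers: one pod averaging 55% CPU against a 20% target scales to 3 replicas,
	// mirroring the 1 -> 3 step in the log.
	fmt.Println(hpaDesiredReplicas(1, 55, 20)) // ceil(2.75) = 3
	// With three pods averaging 32% against the same target, the HPA asks for 5 replicas,
	// mirroring the 3 -> 5 step after consumption rises to 700 millicores.
	fmt.Println(hpaDesiredReplicas(3, 32, 20)) // ceil(4.8) = 5
}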
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sPods\sshould\scap\sback\-off\sat\sMaxContainerBackOff\s\[Slow\]\[NodeConformance\]$'
test/e2e/framework/framework.go:188 Jan 30 02:07:56.072: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-node] Pods test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client Jan 30 01:37:46.748: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:191
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] test/e2e/common/node/pods.go:723 Jan 30 01:37:47.042: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 30 01:37:49.074: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Jan 30 01:37:51.073: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped Jan 30 01:49:16.768: INFO: getRestartDelay: restartCount = 7, finishedAt=2023-01-30 01:44:13 +0000 UTC restartedAt=2023-01-30 01:49:15 +0000 UTC (5m2s) Jan 30 01:54:30.524 to 01:54:50.136: INFO: Container's last state is not "Terminated". (logged roughly once per second while polling for the next restart) Jan 30 01:54:51.168: INFO: getRestartDelay: restartCount = 8, finishedAt=2023-01-30 01:49:20 +0000 UTC restartedAt=2023-01-30 01:54:29 +0000 UTC (5m9s) Jan 30 01:59:46.342: INFO: getRestartDelay: restartCount = 9, finishedAt=2023-01-30 01:54:34 +0000 UTC restartedAt=2023-01-30 01:59:45 +0000 UTC (5m11s)
STEP: getting restart delay after a capped delay Jan 30 02:04:55.966: INFO: getRestartDelay: restartCount = 10, finishedAt=2023-01-30 01:59:50 +0000 UTC restartedAt=2023-01-30 02:04:54 +0000 UTC (5m4s)
[AfterEach] [sig-node] Pods test/e2e/framework/framework.go:188 Jan 30 02:04:55.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 02:04:55.999: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
Failure Jan 30 02:04:58.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:00.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:02.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:04.039: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:06.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:08.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:10.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:12.036: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:14.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:16.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:18.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:20.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:05:22.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:24.036: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:26.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:28.036: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:30.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:32.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:34.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:36.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:38.037: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:40.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:42.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:44.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:05:46.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:48.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:50.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:52.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:54.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:56.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:05:58.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:00.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:02.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:04.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:06.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:08.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:06:10.036: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:12.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:14.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:16.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:18.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:20.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:22.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:24.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:26.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:28.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:30.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:32.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:06:34.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:36.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:38.036: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:40.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:42.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:44.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:46.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:48.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:50.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:52.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:54.042: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:06:56.036: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:06:58.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:00.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:02.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:04.037: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:06.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:08.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:10.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:12.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:14.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:16.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:18.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:20.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:07:22.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:24.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:26.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:28.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:30.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:32.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:34.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:36.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:38.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:40.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:42.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:44.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:07:46.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:48.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:50.033: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:52.034: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:54.035: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:56.038: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:56.072: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:07:56.072: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000583a00, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Destroying namespace "pods-5794" for this suite.
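The back-off-cap log above shows the container restart delay levelling off around five minutes (5m2s, 5m9s, 5m11s, 5m4s) once restartCount reaches 7. Below is a minimal Go sketch of an exponential back-off capped at a maximum, for illustration only; the cappedBackoff helper, the 10s initial delay, and the 5m cap are assumptions inferred from the observed delays, not values read from this run or from the kubelet source.

package main

import (
	"fmt"
	"time"
)

// cappedBackoff doubles the restart delay on every crash until it reaches max,
// mirroring the kind of behaviour the back-off-cap test samples via getRestartDelay.
// The parameters are illustrative assumptions, not kubelet constants read from this run.
func cappedBackoff(restartCount int, initial, max time.Duration) time.Duration {
	delay := initial
	for i := 0; i < restartCount; i++ {
		delay *= 2
		if delay >= max {
			return max
		}
	}
	return delay
}

func main() {
	// Assumed parameters: 10s initial delay, 5m cap.
	for r := 0; r <= 10; r++ {
		fmt.Printf("restartCount=%2d delay=%s\n", r, cappedBackoff(r, 10*time.Second, 5*time.Minute))
	}
}

With these assumed parameters the delay reaches the 5m cap by the fifth restart, which is consistent with the log only sampling delays at restartCount 7 through 10, where every measured delay is already slightly over five minutes.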
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sVariable\sExpansion\sshould\sverify\sthat\sa\sfailing\ssubpath\sexpansion\scan\sbe\smodified\sduring\sthe\slifecycle\sof\sa\scontainer\s\[Slow\]\s\[Conformance\]$'
test/e2e/framework/framework.go:188 Jan 30 02:20:13.559: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 30 02:14:54.364: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] test/e2e/framework/framework.go:652
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 30 02:16:55.287: INFO: Successfully updated pod "var-expansion-30bf7cc4-c3ad-4b46-aecd-6d60099b881e"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 30 02:17:07.352: INFO: Deleting pod "var-expansion-30bf7cc4-c3ad-4b46-aecd-6d60099b881e" in namespace "var-expansion-5884"
Jan 30 02:17:07.395: INFO: Wait up to 5m0s for pod "var-expansion-30bf7cc4-c3ad-4b46-aecd-6d60099b881e" to be fully deleted
[AfterEach] [sig-node] Variable Expansion test/e2e/framework/framework.go:188
Jan 30 02:17:13.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 30 02:17:13.490: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. (this message repeats every ~2s until 02:20:13.558 while waiting for the node to become Ready)
Jan 30 02:20:13.559: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
	test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
	test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
	/usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "var-expansion-5884" for this suite.
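Each of these failures is the same AfterEach check: the suite polls until every node reports Ready, and capz-conf-zghb7 never does while it carries the node.kubernetes.io/unreachable taints. Below is a minimal standalone client-go sketch of that kind of readiness/taint inspection, for illustration only; it is not the e2e framework's own helper, and the kubeconfig path is an assumption taken from the ">>> kubeConfig: /tmp/kubeconfig" lines above.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, matching the kubeConfig reported in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, node := range nodes.Items {
		ready := false
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// A node that stops reporting Ready is also tainted by the node controller,
		// which is what the repeated "tainted by NodeController" lines above record.
		fmt.Printf("node=%s ready=%v taints=%v\n", node.Name, ready, node.Spec.Taints)
	}
}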
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPredicates\s\[Serial\]\svalidates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun\s\s\[Conformance\]$'
test/e2e/framework/framework.go:188 Jan 30 02:27:34.984: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 30 02:23:26.940: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Jan 30 02:23:27.163: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 30 02:23:27.196: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
(the same "Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController" message repeats roughly every 2 seconds from 02:23:29.230 through 02:24:27.294)
Failure Jan 30 02:24:27.294: INFO: Waiting for terminating namespaces to be deleted...
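The taint tuples in that repeated message are the two node.kubernetes.io/unreachable taints the node lifecycle controller places on a node once it stops posting heartbeats; the readiness check still counts capz-conf-zghb7 as not ready because its Ready condition is False. As a rough Go sketch (not the e2e framework's code; timestamps copied from the log), those taints would appear on the Node object like this:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Times taken verbatim from the log lines above.
	noSchedule := metav1.NewTime(time.Date(2023, 1, 30, 1, 17, 54, 0, time.UTC))
	noExecute := metav1.NewTime(time.Date(2023, 1, 30, 1, 17, 59, 0, time.UTC))

	// Taints applied by the node lifecycle controller to an unreachable node.
	taints := []corev1.Taint{
		{Key: "node.kubernetes.io/unreachable", Effect: corev1.TaintEffectNoSchedule, TimeAdded: &noSchedule},
		{Key: "node.kubernetes.io/unreachable", Effect: corev1.TaintEffectNoExecute, TimeAdded: &noExecute},
	}
	for _, t := range taints {
		fmt.Printf("{%s %s %s}\n", t.Key, t.Effect, t.TimeAdded)
	}
}
```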
Jan 30 02:24:27.325: INFO: Logging pods the apiserver thinks is on node capz-conf-mcf4n before test Jan 30 02:24:27.365: INFO: calico-node-windows-ksw8l from calico-system started at 2023-01-29 23:19:20 +0000 UTC (2 container statuses recorded) Jan 30 02:24:27.365: INFO: Container calico-node-felix ready: true, restart count 1 Jan 30 02:24:27.365: INFO: Container calico-node-startup ready: true, restart count 0 Jan 30 02:24:27.365: INFO: containerd-logger-rq5j7 from kube-system started at 2023-01-29 23:19:20 +0000 UTC (1 container statuses recorded) Jan 30 02:24:27.365: INFO: Container containerd-logger ready: true, restart count 0 Jan 30 02:24:27.365: INFO: csi-azuredisk-node-win-cx8lf from kube-system started at 2023-01-29 23:20:11 +0000 UTC (3 container statuses recorded) Jan 30 02:24:27.365: INFO: Container azuredisk ready: true, restart count 0 Jan 30 02:24:27.365: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 02:24:27.365: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 02:24:27.365: INFO: csi-proxy-xg5g6 from kube-system started at 2023-01-29 23:20:11 +0000 UTC (1 container statuses recorded) Jan 30 02:24:27.365: INFO: Container csi-proxy ready: true, restart count 0 Jan 30 02:24:27.365: INFO: kube-proxy-windows-gvpjs from kube-system started at 2023-01-29 23:19:20 +0000 UTC (1 container statuses recorded) Jan 30 02:24:27.365: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] test/e2e/framework/framework.go:652 STEP: verifying the node has the label node capz-conf-mcf4n Jan 30 02:24:27.538: INFO: Pod calico-node-windows-ksw8l requesting resource cpu=0m on Node capz-conf-mcf4n Jan 30 02:24:27.538: INFO: Pod containerd-logger-rq5j7 requesting resource cpu=0m on Node capz-conf-mcf4n Jan 30 02:24:27.538: INFO: Pod csi-azuredisk-node-win-cx8lf requesting resource cpu=0m on Node capz-conf-mcf4n Jan 30 02:24:27.538: INFO: Pod csi-proxy-xg5g6 requesting resource cpu=0m on Node capz-conf-mcf4n Jan 30 02:24:27.538: INFO: Pod kube-proxy-windows-gvpjs requesting resource cpu=0m on Node capz-conf-mcf4n STEP: Starting Pods to consume most of the cluster CPU. Jan 30 02:24:27.538: INFO: Creating a pod which consumes cpu=2800m on Node capz-conf-mcf4n STEP: Creating another pod that requires unavailable amount of CPU.
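The cpu=2800m filler size follows from simple accounting: every pod already on capz-conf-mcf4n requests 0m CPU, so the test requests roughly the node's remaining allocatable CPU and then submits one more pod that cannot fit. A minimal client-go sketch of that accounting, assuming a clientset built from the /tmp/kubeconfig shown above (illustrative only, not the e2e framework's helper, which also handles terminated pods and leaves a margin):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// remainingCPUMilli returns the node's allocatable CPU minus the CPU requests
// of the pods already bound to it, in millicores. With every existing pod
// requesting 0m (as logged above), this is simply the allocatable CPU.
func remainingCPUMilli(ctx context.Context, cs kubernetes.Interface, nodeName string) (int64, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		return 0, err
	}
	var requested int64
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			requested += c.Resources.Requests.Cpu().MilliValue()
		}
	}
	return node.Status.Allocatable.Cpu().MilliValue() - requested, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	free, err := remainingCPUMilli(context.Background(), cs, "capz-conf-mcf4n")
	if err != nil {
		panic(err)
	}
	// A filler pod sized close to this value leaves no room for another pod.
	fmt.Printf("filler pod can request roughly %dm CPU\n", free)
}
```

The conformance test then only asserts that a FailedScheduling event with "Insufficient cpu" is recorded for the extra pod, as seen in the events below.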
STEP: Considering event: Type = [Normal], Name = [filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683.173ef4fa5d2b23e1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1797/filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683 to capz-conf-mcf4n] STEP: Considering event: Type = [Normal], Name = [filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683.173ef4fae9e21e0c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.7" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683.173ef4faf2c3a900], Reason = [Created], Message = [Created container filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683] STEP: Considering event: Type = [Normal], Name = [filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683.173ef4fb4502cf5c], Reason = [Started], Message = [Started container filler-pod-82b7bda2-e69e-4012-b01b-0f39fa645683] STEP: Considering event: Type = [Warning], Name = [additional-pod.173ef4fbcc76cefa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.] STEP: removing the label node off the node capz-conf-mcf4n STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:188 Jan 30 02:24:34.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 02:24:34.919: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
(the same "Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController" message repeats roughly every 2 seconds from 02:24:36.953 through 02:27:32.952)
Failure Jan 30 02:27:34.952: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:27:34.984: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:27:34.984: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "sched-pred-1797" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
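The failure itself comes from the AfterEach readiness gate, which polls every ~2s for up to 3m0s and reports any node whose Ready condition is not True; the unreachable taints do not exempt capz-conf-zghb7 from that check. A hedged client-go sketch of that kind of poll (illustrative, not the framework's actual implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// notReadyNodes lists nodes whose Ready condition is not True, regardless of
// any taints already placed on them.
func notReadyNodes(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var names []string
	for _, n := range nodes.Items {
		ready := false
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			names = append(names, n.Name)
		}
	}
	return names, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 3m, mirroring the cadence and timeout in the log.
	var remaining []string
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		var listErr error
		remaining, listErr = notReadyNodes(context.Background(), cs)
		if listErr != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return len(remaining) == 0, nil
	})
	if err != nil {
		fmt.Printf("FAIL: All nodes should be ready after test, Not ready nodes: %v\n", remaining)
	}
}
```

Because the node stays NotReady for the whole window, otherwise-passing cases in this run report the same "Not ready nodes: capz-conf-zghb7" failure.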
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:Windows\]\sCpu\sResources\s\[Serial\]\sContainer\slimits\sshould\snot\sbe\sexceeded\safter\swaiting\s2\sminutes$'
test/e2e/framework/framework.go:188 Jan 30 02:36:14.001: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/windows/framework.go:28 [BeforeEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Jan 30 02:31:01.050: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cpu-resources-test-windows STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be exceeded after waiting 2 minutes test/e2e/windows/cpu_limits.go:43 STEP: Creating one pod with limit set to '0.5' Jan 30 02:31:01.338: INFO: The status of Pod cpulimittest-07545be8-0aee-4afc-9c57-47c38b2afec1 is Pending, waiting for it to be Running (with Ready = true) Jan 30 02:31:03.370: INFO: The status of Pod cpulimittest-07545be8-0aee-4afc-9c57-47c38b2afec1 is Pending, waiting for it to be Running (with Ready = true) Jan 30 02:31:05.370: INFO: The status of Pod cpulimittest-07545be8-0aee-4afc-9c57-47c38b2afec1 is Pending, waiting for it to be Running (with Ready = true) Jan 30 02:31:07.370: INFO: The status of Pod cpulimittest-07545be8-0aee-4afc-9c57-47c38b2afec1 is Running (Ready = true) STEP: Creating one pod with limit set to '500m' Jan 30 02:31:07.469: INFO: The status of Pod cpulimittest-6e59ee53-0b90-4b36-8791-3f45954266a4 is Pending, waiting for it to be Running (with Ready = true) Jan 30 02:31:09.502: INFO: The status of Pod cpulimittest-6e59ee53-0b90-4b36-8791-3f45954266a4 is Pending, waiting for it to be Running (with Ready = true) Jan 30 02:31:11.503: INFO: The status of Pod cpulimittest-6e59ee53-0b90-4b36-8791-3f45954266a4 is Pending, waiting for it to be Running (with Ready = true) Jan 30 02:31:13.502: INFO: The status of Pod cpulimittest-6e59ee53-0b90-4b36-8791-3f45954266a4 is Running (Ready = true) STEP: Waiting 2 minutes STEP: Ensuring pods are still running STEP: Ensuring cpu doesn't exceed limit by >5% STEP: Gathering node summary stats Jan 30 02:33:13.815: INFO: Pod cpulimittest-07545be8-0aee-4afc-9c57-47c38b2afec1 usage: 0.492468145 STEP: Gathering node summary stats Jan 30 02:33:13.904: INFO: Pod cpulimittest-6e59ee53-0b90-4b36-8791-3f45954266a4 usage: 0.500533452
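The pass/fail arithmetic for the limit check is straightforward: with a 0.5-core (500m) CPU limit and a 5% tolerance, measured usage must stay at or below 0.525 cores, which both readings (0.492468145 and 0.500533452) satisfy. A small Go illustration using the values from the log (the constants and layout are illustrative, not the test's own code):

```go
package main

import "fmt"

func main() {
	const limitCores = 0.5 // both pods set a CPU limit of 0.5 cores (500m)
	const tolerance = 0.05 // the test tolerates usage up to 5% over the limit
	maxAllowed := limitCores * (1 + tolerance) // 0.525 cores

	// Usage values copied from the node summary stats above.
	usage := map[string]float64{
		"cpulimittest-07545be8-0aee-4afc-9c57-47c38b2afec1": 0.492468145,
		"cpulimittest-6e59ee53-0b90-4b36-8791-3f45954266a4": 0.500533452,
	}
	for pod, u := range usage {
		fmt.Printf("%s: %.9f <= %.3f -> %v\n", pod, u, maxAllowed, u <= maxAllowed)
	}
}
```

Both comparisons hold, so the CPU-limit assertion itself passed; the case is reported as failed only because of the node-readiness check in the AfterEach teardown that follows.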
[AfterEach] [sig-windows] [Feature:Windows] Cpu Resources [Serial] test/e2e/framework/framework.go:188 Jan 30 02:33:13.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 30 02:33:13.936: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
(the same "Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController" message repeats roughly every 2 seconds from 02:33:15.969 through 02:36:05.970)
Failure Jan 30 02:36:07.971: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:36:09.970: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:36:11.972: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:36:13.969: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:36:14.001: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:36:14.001: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000583a00, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Destroying namespace "cpu-resources-test-windows-6155" for this suite.
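The repeated readiness messages above combine two facts about capz-conf-zghb7: its Ready condition is reported as False, and the node controller has applied node.kubernetes.io/unreachable taints. As a rough, hypothetical client-go sketch (not the e2e framework's own code), the same two pieces of information can be read straight from the API:

// Hypothetical sketch, not part of the e2e framework: print the Ready
// condition and the taints of the node named in the failures above.
// Assumes KUBECONFIG points at the affected cluster.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "capz-conf-zghb7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The e2e readiness wait treats the node as not ready unless this is "True".
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%q\n", cond.Status, cond.Reason)
		}
	}
	// The node.kubernetes.io/unreachable taints quoted in the log show up here.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
	}
}

In this run it would presumably print Ready=False plus the two unreachable taints the NodeController added at 01:17.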
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:Windows\]\sDensity\s\[Serial\]\s\[Slow\]\screate\sa\sbatch\sof\spods\slatency\/resource\sshould\sbe\swithin\slimit\swhen\screate\s10\spods\swith\s0s\sinterval$'
test/e2e/framework/framework.go:188 Jan 30 02:11:46.716: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 30 02:07:56.114: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename density-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] latency/resource should be within limit when create 10 pods with 0s interval
  test/e2e/windows/density.go:68
STEP: Creating a batch of pods
STEP: Waiting for all Pods to be observed by the watch...
Jan 30 02:08:16 to 02:08:46: waiting for the 10 batch pods to disappear (test-c2f2186b-1f18-4256-827f-77e565642e8b, test-9e9a07f6-eb42-41f4-b368-3b42149a3f1b, test-0203a808-bae5-496e-a1c0-c7b1feaaf37c, test-e81ebf5b-0e08-4a46-ad82-c6bdc7e57066, test-3eb66165-08ed-4718-87d6-a0b14f87cce8, test-9a1741e3-7367-42a8-bcca-1458333cf320, test-6bb38b82-b309-46f3-ab6c-0cade5f4a3dc, test-0ba272f2-7960-42e0-a169-c3e3597c4dbf, test-85e6f05d-369a-4d9e-adcd-cf9ef3e93126, test-cc43c395-35a0-4b4b-afba-ceda83c765d4); at 02:08:16 all ten were still present, and by 02:08:46 all ten were reported as no longer existing
[AfterEach] [sig-windows] [Feature:Windows] Density [Serial] [Slow]
  test/e2e/framework/framework.go:188
Jan 30 02:08:46.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 30 02:08:46.649: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
(the same not-ready/tainted message repeated roughly every 2s from Jan 30 02:08:48 through 02:11:46, i.e. for the full 3m0s wait)
Failure Jan 30 02:11:46.716: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7"
Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?)
    test/e2e/e2e.go:130 +0x6bb
k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?)
    test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000583a00, 0x741f9a8)
    /usr/local/go/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1493 +0x35f
STEP: Destroying namespace "density-test-windows-3086" for this suite.
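The 2-second cadence and the 3m0s bound visible in these logs come from a poll loop: the framework re-lists nodes every couple of seconds and only fails the test once the timeout expires. A minimal, hypothetical approximation of such a loop (not the framework's actual helper) is sketched below:

// Hypothetical sketch, not the k8s e2e framework's implementation:
// poll every 2s until all nodes report Ready=True or the timeout expires,
// roughly mirroring the AfterEach wait logged above.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNodesReady(client kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err // give up on API errors
		}
		allReady := true
		for _, n := range nodes.Items {
			ready := false
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				// Mirrors the "Condition Ready of node ... is false" lines above.
				fmt.Printf("Condition Ready of node %s is false\n", n.Name)
				allReady = false
			}
		}
		return allReady, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	if err := waitForNodesReady(kubernetes.NewForConfigOrDie(cfg), 3*time.Minute); err != nil {
		fmt.Println("FAIL: All nodes should be ready after test:", err)
	}
}

When a loop like this times out, the surrounding test entry is reported as a failure even though the test body itself may have passed, which matches the entries in this run.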
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-windows\]\s\[Feature\:Windows\]\sGMSA\sKubelet\s\[Slow\]\skubelet\sGMSA\ssupport\swhen\screating\sa\spod\swith\scorrect\sGMSA\scredential\sspecs\spasses\sthe\scredential\sspecs\sdown\sto\sthe\sPod\'s\scontainers$'
test/e2e/framework/framework.go:188 Jan 30 02:14:54.327: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit.kubetest.01.xml
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/windows/framework.go:28
[BeforeEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Jan 30 02:11:46.758: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gmsa-kubelet-test-windows
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] passes the credential specs down to the Pod's containers
  test/e2e/windows/gmsa_kubelet.go:45
STEP: creating a pod with correct GMSA specs
Jan 30 02:11:47.062: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 30 02:11:49.094: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 30 02:11:51.094: INFO: The status of Pod with-correct-gmsa-specs is Pending, waiting for it to be Running (with Ready = true)
Jan 30 02:11:53.095: INFO: The status of Pod with-correct-gmsa-specs is Running (Ready = true)
STEP: checking the domain reported by nltest in the containers
Jan 30 02:11:53.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-4613 exec --namespace=gmsa-kubelet-test-windows-4613 with-correct-gmsa-specs --container=container1 -- nltest /PARENTDOMAIN'
Jan 30 02:11:53.730: INFO: stderr: ""
Jan 30 02:11:53.730: INFO: stdout: "acme.com. (1)\r\nThe command completed successfully\r\n"
Jan 30 02:11:53.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=gmsa-kubelet-test-windows-4613 exec --namespace=gmsa-kubelet-test-windows-4613 with-correct-gmsa-specs --container=container2 -- nltest /PARENTDOMAIN'
Jan 30 02:11:54.227: INFO: stderr: ""
Jan 30 02:11:54.227: INFO: stdout: "contoso.org. (1)\r\nThe command completed successfully\r\n"
[AfterEach] [sig-windows] [Feature:Windows] GMSA Kubelet [Slow]
  test/e2e/framework/framework.go:188
Jan 30 02:11:54.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 30 02:11:54.260: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}].
(the same not-ready/tainted message for capz-conf-zghb7 repeated roughly every 2s from Jan 30 02:11:56 through at least 02:14:24)
Failure Jan 30 02:14:26.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:28.294: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:30.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:32.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:34.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:36.294: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:38.294: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:40.294: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:42.292: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:44.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:46.294: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:48.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. 
Failure Jan 30 02:14:50.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:52.293: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:54.294: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:54.326: INFO: Condition Ready of node capz-conf-zghb7 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 01:17:54 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 01:17:59 +0000 UTC}]. Failure Jan 30 02:14:54.326: FAIL: All nodes should be ready after test, Not ready nodes: ", capz-conf-zghb7" Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x25761d7?) test/e2e/e2e.go:130 +0x6bb k8s.io/kubernetes/test/e2e.TestE2E(0x24e52d9?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000583a00, 0x741f9a8) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Destroying namespace "gmsa-kubelet-test-windows-4613" for this suite.
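The wait loop summarized above is the e2e framework repeatedly checking node readiness after the test; it ultimately fails because capz-conf-zghb7 never returns to Ready and keeps the node.kubernetes.io/unreachable taints applied by the node controller. For orientation only, the following is a minimal client-go sketch, not code from the e2e framework (the flag names, defaults, and output format are assumptions), that inspects a node's Ready condition and taints the same way one would when triaging this failure:

// node-ready-check: print a node's Ready condition and taints.
// Illustrative sketch only; flag names and output format are assumed.
package main

import (
	"context"
	"flag"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig for the affected cluster")
	nodeName := flag.String("node", "capz-conf-zghb7", "node to inspect")
	flag.Parse()

	// Build a client from the given kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), *nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print the Ready condition ("Ready" is corev1.NodeReady); the log above
	// corresponds to this condition not being True.
	for _, cond := range node.Status.Conditions {
		if cond.Type == "Ready" {
			fmt.Printf("Ready=%s (reason=%s, since %s)\n", cond.Status, cond.Reason, cond.LastTransitionTime)
		}
	}
	// Print any taints, e.g. the node.kubernetes.io/unreachable NoSchedule and
	// NoExecute taints added by the node controller in the log above.
	for _, taint := range node.Spec.Taints {
		fmt.Printf("taint %s=%s:%s (added %v)\n", taint.Key, taint.Value, taint.Effect, taint.TimeAdded)
	}
}

Run against the affected cluster (for example with -node capz-conf-zghb7), a check like this would show a non-True Ready condition together with the unreachable taints, matching the log lines above; the node (or its kubelet connectivity) has to recover before the suite's AfterEach node-readiness check can pass.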
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [sig-windows] [Feature:Windows] Kubelet-Stats [Serial] Kubelet stats collection for Windows nodes when running 10 pods should return within 10 seconds
Kubernetes e2e suite [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] Allocatable node memory should be equal to a calculated allocatable memory value
Kubernetes e2e suite [sig-windows] [Feature:Windows] Memory Limits [Serial] [Slow] attempt to deploy past allocatable memory limits should fail deployments of pods once there isn't enough memory
capz-e2e [It] Conformance Tests conformance-tests
capz-e2e [SynchronizedAfterSuite]
capz-e2e [SynchronizedBeforeSuite]
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validator rules
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST fail create of a custom resource definition that contains a x-kubernetes-validator rule that refers to a property that do not exist
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validator rules
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should manage the lifecycle of a job
Kubernetes e2e suite [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API with endport field [Feature:NetworkPolicyEndPort]
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod
Kubernetes e2e suite [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
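The Probing container group above exercises livenessProbe, readinessProbe, and startupProbe behaviour. A minimal sketch of an HTTP liveness probe plus a startup probe that delays it (values and names are assumed, not the suite's own manifests; in recent k8s.io/api releases the embedded handler field is named ProbeHandler):

// probe_demo.go - illustrative sketch, not e2e framework code.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "probed",  // hypothetical name
		Image: "busybox", // placeholder image
		// /healthz HTTP liveness probe, as in the "/healthz http liveness probe" cases.
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			PeriodSeconds:    10,
			FailureThreshold: 3,
		},
		// Liveness and readiness probes do not run until the startup probe
		// succeeds, which is the behaviour behind "should *not* be restarted
		// by liveness probe because startup probe delays it".
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			PeriodSeconds:    5,
			FailureThreshold: 30,
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-demo"}, // hypothetical name
		Spec:       corev1.PodSpec{Containers: []corev1.Container{container}},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}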
Kubernetes e2e suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
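The Security Context cases above toggle fields such as runAsNonRoot, runAsUser/runAsGroup, readOnlyRootFilesystem, seccomp profiles, and allowPrivilegeEscalation. A hedged sketch of a container-level security context carrying those fields (the concrete values are arbitrary examples, not what the suite sets):

// securitycontext_demo.go - illustrative sketch, not e2e framework code.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	sc := corev1.SecurityContext{
		RunAsNonRoot:             boolPtr(true),   // "runAsNonRoot should run with an explicit non-root user ID"
		RunAsUser:                int64Ptr(65534), // uid 65534, as in the runAsUser cases
		RunAsGroup:               int64Ptr(65534),
		ReadOnlyRootFilesystem:   boolPtr(true),  // readOnlyRootFilesystem=true case
		AllowPrivilegeEscalation: boolPtr(false), // "should not allow privilege escalation when false"
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault, // "seccomp runtime/default"
		},
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}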
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
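The SchedulerPredicates/SchedulerPriorities and NoExecuteTaintManager cases above revolve around tolerations, node affinity, and PodTopologySpread. As a rough, assumed-value sketch of the two pod-spec fields the taint and topology-spread cases manipulate (the taint key and labels here are hypothetical):

// scheduling_demo.go - illustrative sketch, not e2e framework code.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "app", Image: "busybox"}}, // placeholder
		// Toleration matching a NoExecute taint, the mechanism behind the
		// "doesn't evict pod with tolerations from tainted nodes" cases.
		Tolerations: []corev1.Toleration{{
			Key:      "example.com/dedicated", // hypothetical taint key
			Operator: corev1.TolerationOpEqual,
			Value:    "e2e",
			Effect:   corev1.TaintEffectNoExecute,
		}},
		// MaxSkew=1 spread over nodes, as in "validates 4 pods with MaxSkew=1
		// are evenly distributed into 2 nodes".
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/hostname",
			WhenUnsatisfiable: corev1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "demo"}, // hypothetical label
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}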
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] deletion should be idempotent
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with different parameters
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when attachable [Feature:Flexvolumes]
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [sig-storage] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
Kubernetes e2e suite [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
Kubernetes e2e suite [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket
Kubernetes e2e suite [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
Kubernetes e2e suite [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
Kubernetes e2e suite [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]