Result   | FAILURE
Tests    | 3 failed / 892 succeeded
Started  |
Elapsed  | 42m6s
Revision | master
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sRuntimeClass\sshould\srun\sa\sPod\srequesting\sa\sRuntimeClass\swith\sa\sconfigured\shandler\s\[NodeFeature\:RuntimeHandler\]$'
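To rerun just this case outside the full harness, the same focus regex can be handed to a locally built e2e.test binary. This is a sketch, assuming a reachable cluster and kubeconfig; --ginkgo.focus and --kubeconfig are the standard e2e framework flags, but verify them against your checkout:

    # Hypothetical single-test rerun; adjust the kubeconfig path for your cluster.
    ./e2e.test \
      --ginkgo.focus='\[sig-node\] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler \[NodeFeature:RuntimeHandler\]' \
      --kubeconfig="$HOME/.kube/config"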
[FAILED] timed out while waiting for pod runtimeclass-4351/test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4 to be Succeeded or Failed
In [It] at: test/e2e/common/node/runtimeclass.go:386 @ 01/31/23 23:33:07.822
> Enter [BeforeEach] [sig-node] RuntimeClass - set up framework | framework.go:188 @ 01/31/23 23:28:00.456
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/31/23 23:28:00.456
Jan 31 23:28:00.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass - test/e2e/framework/framework.go:247 @ 01/31/23 23:28:00.457
E0131 23:28:00.667796 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/31/23 23:28:01.178
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/31/23 23:28:01.654
E0131 23:28:01.668579 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
< Exit [BeforeEach] [sig-node] RuntimeClass - set up framework | framework.go:188 @ 01/31/23 23:28:02.13 (1.675s)
> Enter [BeforeEach] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:33 @ 01/31/23 23:28:02.13
< Exit [BeforeEach] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:33 @ 01/31/23 23:28:02.13 (0s)
> Enter [It] should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler] - test/e2e/common/node/runtimeclass.go:85 @ 01/31/23 23:28:02.13
Jan 31 23:28:02.613: INFO: Waiting up to 5m0s for pod "hostexec-i-0f27661869b93b9cf-6jhph" in namespace "runtimeclass-4351" to be "running"
E0131 23:28:02.669075 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
Jan 31 23:28:02.853: INFO: Pod "hostexec-i-0f27661869b93b9cf-6jhph": Phase="Pending", Reason="", readiness=false. Elapsed: 239.343932ms
E0131 23:28:03.669409 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
E0131 23:28:04.669986 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
Jan 31 23:28:05.095: INFO: Pod "hostexec-i-0f27661869b93b9cf-6jhph": Phase="Running", Reason="", readiness=true. Elapsed: 2.481934233s
Jan 31 23:28:05.095: INFO: Pod "hostexec-i-0f27661869b93b9cf-6jhph" satisfied condition "running"
Jan 31 23:28:05.095: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c if [ -e '/etc/containerd/config.toml' ]; then
  grep -q 'runtimes.test-handler' /etc/containerd/config.toml
  return
fi
if [ -e '/etc/crio/crio.conf' ]; then
  grep -q 'runtimes.test-handler' /etc/crio/crio.conf
  return
fi] Namespace:runtimeclass-4351 PodName:hostexec-i-0f27661869b93b9cf-6jhph ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jan 31 23:28:05.095: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 23:28:05.096: INFO: ExecWithOptions: Clientset creation
Jan 31 23:28:05.096: INFO: ExecWithOptions: execute(POST https://api-e2e-e2e-kops-warm-poo-agujp3-6251a9cd4adb6f29.elb.ap-south-1.amazonaws.com/api/v1/namespaces/runtimeclass-4351/pods/hostexec-i-0f27661869b93b9cf-6jhph/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=if+%5B+-e+%27%2Fetc%2Fcontainerd%2Fconfig.toml%27+%5D%3B+then%0Agrep+-q+%27runtimes.test-handler%27+%2Fetc%2Fcontainerd%2Fconfig.toml%0A++return%0Afi%0A%0Aif+%5B+-e+%27%2Fetc%2Fcrio%2Fcrio.conf%27+%5D%3B+then%0A++grep+-q+%27runtimes.test-handler%27+%2Fetc%2Fcrio%2Fcrio.conf%0A++return%0Afi%0A&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
E0131 23:28:05.670802 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
E0131 23:28:06.671741 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
Jan 31 23:28:07.103: INFO: Waiting up to 5m0s for pod "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4" in namespace "runtimeclass-4351" to be "Succeeded or Failed"
Jan 31 23:28:07.342: INFO: Pod "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4": Phase="Pending", Reason="", readiness=false. Elapsed: 239.28489ms
Jan 31 23:28:09.582: INFO: Pod "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479139307s
[... ~150 near-identical poll entries elided: every ~2s from 23:28:11 through 23:33:03 the pod reports Phase="Pending", Reason="", readiness=false, interleaved with the recurring retrywatcher.go:130 "Watch failed" err="context canceled" errors ...]
Jan 31 23:33:05.582: INFO: Pod "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.479317205s
E0131 23:33:05.844920 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
E0131 23:33:06.845910 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
Jan 31 23:33:07.581: INFO: Pod "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.478699852s
Jan 31 23:33:07.821: INFO: Pod "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.718674828s
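For readability, the handler probe the test ran on the hostexec pod (quoted inside the ExecWithOptions entry above; the %0A sequences in the POST URL mark the original line breaks) is equivalent to the standalone script below. This is a reconstruction, with the bare `return` swapped for `exit` so it is valid outside an `sh -c` / sourced context:

    #!/bin/sh
    # Check whether this node's CRI runtime has a "test-handler" runtime configured.
    if [ -e /etc/containerd/config.toml ]; then
      grep -q 'runtimes.test-handler' /etc/containerd/config.toml
      exit $?
    fi
    if [ -e /etc/crio/crio.conf ]; then
      grep -q 'runtimes.test-handler' /etc/crio/crio.conf
      exit $?
    fi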
Jan 31 23:33:07.822: INFO: Unexpected error:
<*pod.timeoutError | 0xc003ec17a0>: {
    msg: "timed out while waiting for pod runtimeclass-4351/test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4 to be Succeeded or Failed",
    observedObjects: [
        <*v1.Pod | 0xc00519c900>{
            TypeMeta: {Kind: "", APIVersion: ""},
            ObjectMeta: {
                Name: "test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4",
                GenerateName: "test-runtimeclass-runtimeclass-4351-preconfigured-handler-",
                Namespace: "runtimeclass-4351",
                SelfLink: "", UID: "9e45f66e-a082-4e1e-a132-6f86baa1f7b9", ResourceVersion: "49510", Generation: 0,
                CreationTimestamp: {Time: {wall: 0, ext: 63810804486, loc: {name: "Local", zone: [{name: "UTC", offset: 0, isDST: false}], tx: [{when: -9223372036854775808, index: 0, isstd: false, isutc: false}], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: "UTC", offset: 0, isDST: false}}}},
                DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: nil, OwnerReferences: nil, Finalizers: nil,
                ManagedFields: [
                    {Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: {Time: {wall: 0, ext: 63810804486, loc: {name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}}}}, FieldsType: "FieldsV1", FieldsV1: {Raw: "{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:automountServiceAccountToken\":{},\"f:containers\":{\"k:{\\\"name\\\":\\\"test\\\"}\":{\".\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:runtimeClassName\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}"}, Subresource: ""},
                    {Manager: "kubelet", Operation: "Update", APIVersion: "v1", Time: {Time: {wall: 0, ext: 63810804487, loc: {name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}}}}, FieldsType: "FieldsV1", FieldsV1: {Raw: "{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:startTime\":{}}}"}, Subresource: "status"},
                ],
            },
            Spec: {
                Volumes: nil, InitContainers: nil,
                Containers: [
                    {Name: "test", Image: "registry.k8s.io/e2e-test-images/busybox:1.29-4", Command: ["true"], Args: nil, WorkingDir: "", Ports: nil, EnvFrom: nil, Env: nil, Resources: {Limits: nil, Requests: nil, Claims: nil}, VolumeMounts: nil, VolumeDevices: nil, LivenessProbe: nil, ReadinessProbe: nil, StartupProbe: nil, Lifecycle: nil, TerminationMessagePath: "/dev/termination-log", TerminationMessagePolicy: "File", ImagePullPolicy: "IfNotPresent", SecurityContext: nil, Stdin: false, StdinOnce: false, TTY: false},
                ],
                EphemeralContainers: nil, RestartPolicy: "Never", TerminationGracePeriodSeconds: 30, ActiveDeadlineSeconds: nil, DNSPolicy: "ClusterFirst", NodeSelector: nil, ServiceAccountName: "default", DeprecatedServiceAccount: "default", AutomountServiceAccountToken: false, NodeName: "i-0b2bb17140492f5c5", HostNetwork: false, HostPID: false, HostIPC: false, ShareProcessNamespace: nil,
                SecurityContext: {SELinuxOptions: nil, WindowsOptions: nil, RunAsUser: nil, RunAsGroup: nil, RunAsNonRoot: nil, SupplementalGroups: nil, FSGroup: nil, Sysctls: nil, FSGroupChangePolicy: nil, ...
Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output
[FAILED] timed out while waiting for pod runtimeclass-4351/test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4 to be Succeeded or Failed
In [It] at: test/e2e/common/node/runtimeclass.go:386 @ 01/31/23 23:33:07.822
< Exit [It] should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler] - test/e2e/common/node/runtimeclass.go:85 @ 01/31/23 23:33:07.822 (5m5.692s)
> Enter [AfterEach] [sig-node] RuntimeClass - test/e2e/framework/node/init/init.go:33 @ 01/31/23 23:33:07.822
Jan 31 23:33:07.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
E0131 23:33:07.846340 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
< Exit [AfterEach] [sig-node] RuntimeClass - test/e2e/framework/node/init/init.go:33 @ 01/31/23 23:33:08.063 (240ms)
> Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/common/node/runtimeclass.go:91 @ 01/31/23 23:33:08.063
< Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/common/node/runtimeclass.go:91 @ 01/31/23 23:33:08.305 (242ms)
> Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/node/runtimeclass/runtimeclass.go:64 @ 01/31/23 23:33:08.305
STEP: Deleting pod hostexec-i-0f27661869b93b9cf-6jhph in namespace runtimeclass-4351 - test/e2e/framework/pod/delete.go:41 @ 01/31/23 23:33:08.305
< Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/node/runtimeclass/runtimeclass.go:64 @ 01/31/23 23:33:08.55 (246ms)
> Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:35 @ 01/31/23 23:33:08.55
< Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:35 @ 01/31/23 23:33:08.55 (0s)
> Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - dump namespaces | framework.go:206 @ 01/31/23 23:33:08.55
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/31/23 23:33:08.55
STEP: Collecting events from namespace "runtimeclass-4351". - test/e2e/framework/debug/dump.go:42 @ 01/31/23 23:33:08.55
STEP: Found 7 events. - test/e2e/framework/debug/dump.go:46 @ 01/31/23 23:33:08.79
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:28:02 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-6jhph: {default-scheduler } Scheduled: Successfully assigned runtimeclass-4351/hostexec-i-0f27661869b93b9cf-6jhph to i-0f27661869b93b9cf
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:28:02 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-6jhph: {kubelet i-0f27661869b93b9cf} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:28:02 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-6jhph: {kubelet i-0f27661869b93b9cf} Created: Created container agnhost-container
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:28:03 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-6jhph: {kubelet i-0f27661869b93b9cf} Started: Started container agnhost-container
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:28:06 +0000 UTC - event for test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4: {default-scheduler } Scheduled: Successfully assigned runtimeclass-4351/test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4 to i-0b2bb17140492f5c5
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:28:07 +0000 UTC - event for test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4: {kubelet i-0b2bb17140492f5c5} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "test-handler" is configured
Jan 31 23:33:08.790: INFO: At 2023-01-31 23:33:08 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-6jhph: {kubelet i-0f27661869b93b9cf} Killing: Stopping container agnhost-container
E0131 23:33:08.846733 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
Jan 31 23:33:09.029: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 31 23:33:09.029: INFO: test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4 i-0b2bb17140492f5c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:28:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:28:06 +0000 UTC ContainersNotReady containers with unready status: [test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:28:06 +0000 UTC ContainersNotReady containers with unready status: [test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:28:06 +0000 UTC }]
Jan 31 23:33:09.029: INFO:
Jan 31 23:33:09.280: INFO: Unable to fetch runtimeclass-4351/test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4/test logs: the server rejected our request for an unknown reason (get pods test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4)
Jan 31 23:33:09.520: INFO: Logging node info for node i-01f2014d35b8c12f6
Jan 31 23:33:09.759: INFO: Node Info: &Node{ObjectMeta:{i-01f2014d35b8c12f6 3ed71f02-c5f9-4792-b864-3ebf0393fb20 51359 0 2023-01-31 23:02:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:c6g.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:arm64 kubernetes.io/hostname:i-01f2014d35b8c12f6 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c6g.large topology.ebs.csi.aws.com/zone:ap-south-1a topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a]
map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01f2014d35b8c12f6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-31 23:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {protokube Update v1 2023-01-31 23:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:03:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:03:28 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-31 23:03:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:30:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-01f2014d35b8c12f6,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4002136064 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3897278464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:30:11 +0000 UTC,LastTransitionTime:2023-01-31 23:02:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:30:11 +0000 UTC,LastTransitionTime:2023-01-31 23:02:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:30:11 +0000 UTC,LastTransitionTime:2023-01-31 23:02:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:30:11 +0000 UTC,LastTransitionTime:2023-01-31 23:03:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.57.226,},NodeAddress{Type:ExternalIP,Address:13.234.119.60,},NodeAddress{Type:InternalDNS,Address:i-01f2014d35b8c12f6.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-01f2014d35b8c12f6.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-234-119-60.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b7a454b38daa04b90c6fd457c5ab6,SystemUUID:ec2b7a45-4b38-daa0-4b90-c6fd457c5ab6,BootID:9e578a7e-09ed-4dad-9613-7d6fc65f5c24,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d registry.k8s.io/etcd:3.5.0-0],SizeBytes:157797353,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 registry.k8s.io/etcd:3.4.3-0],SizeBytes:151677080,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 registry.k8s.io/etcd:3.4.13-0],SizeBytes:134531559,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:131348942,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:0d2ab6b15b4fe0b903b9e81192d942fb39c29be61fe341cd16db5d0165b0435c registry.k8s.io/etcd:3.3.17-0],SizeBytes:126979107,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:121191220,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 registry.k8s.io/etcd:3.3.10-0],SizeBytes:81436097,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:81120118,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 registry.k8s.io/etcd:3.5.3-0],SizeBytes:81114891,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:80665728,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:80539316,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 registry.k8s.io/etcd:3.2.24-1],SizeBytes:61500433,},ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:81677375343460c8b223d5217ed2ee037d2578b470b3260637a735f57e5bdf04 registry.k8s.io/etcdadm/etcd-manager:v3.0.20230119-slim],SizeBytes:60827222,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263 registry.k8s.io/etcd:3.5.1-0],SizeBytes:59553904,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:56048435,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:41210835,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:39793999,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:24567287,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:18345226,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:4691082,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 31 23:33:09.759: INFO: Logging kubelet events for node i-01f2014d35b8c12f6 E0131 23:33:09.847510 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:10.002: INFO: Logging pods the kubelet thinks is on node i-01f2014d35b8c12f6 Jan 31 23:33:10.261: INFO: cilium-operator-6f7d7f696f-2kr6d started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Container cilium-operator ready: true, restart count 0 Jan 31 23:33:10.261: INFO: dns-controller-85d56fc6db-b6qcv started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Container dns-controller ready: true, restart count 0 Jan 31 23:33:10.261: INFO: kops-controller-jjk5s started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Container kops-controller ready: true, restart count 0 Jan 31 23:33:10.261: INFO: aws-cloud-controller-manager-vwrlq started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Container aws-cloud-controller-manager ready: true, restart count 0 Jan 31 23:33:10.261: 
INFO: etcd-manager-events-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (11+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Init container init-etcd-3-2-24 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-3-10 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-3-17 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-4-13 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-4-3 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-0 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-1 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-3 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-4 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-6 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-7 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Container etcd-manager ready: true, restart count 0 Jan 31 23:33:10.261: INFO: ebs-csi-node-8r2xx started at 2023-01-31 23:03:00 +0000 UTC (0+3 container statuses recorded) Jan 31 23:33:10.261: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Container liveness-probe ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 31 23:33:10.261: INFO: kube-controller-manager-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 31 23:33:10.261: INFO: kube-scheduler-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Container kube-scheduler ready: true, restart count 0 Jan 31 23:33:10.261: INFO: cilium-t6nbx started at 2023-01-31 23:03:00 +0000 UTC (1+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Container cilium-agent ready: true, restart count 0 Jan 31 23:33:10.261: INFO: etcd-manager-main-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (11+1 container statuses recorded) Jan 31 23:33:10.261: INFO: Init container init-etcd-3-2-24 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-3-10 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-3-17 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-4-13 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-4-3 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-0 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-1 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-3 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-4 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-6 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Init container init-etcd-3-5-7 ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Container etcd-manager ready: true, restart count 0 Jan 31 23:33:10.261: INFO: kube-apiserver-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC 
(0+2 container statuses recorded) Jan 31 23:33:10.261: INFO: Container healthcheck ready: true, restart count 0 Jan 31 23:33:10.261: INFO: Container kube-apiserver ready: true, restart count 3 E0131 23:33:10.848222 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:11.034: INFO: Latency metrics for node i-01f2014d35b8c12f6 Jan 31 23:33:11.034: INFO: Logging node info for node i-06cd6913614b82fae Jan 31 23:33:11.273: INFO: Node Info: &Node{ObjectMeta:{i-06cd6913614b82fae 8e138151-af54-4408-a2ea-d3335d4fe064 51069 0 2023-01-31 23:04:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64 kubernetes.io/hostname:i-06cd6913614b82fae kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-06cd6913614b82fae topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-06cd6913614b82fae"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-01-31 23:29:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-06cd6913614b82fae,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027252736 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922395136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:29:26 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:29:26 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:29:26 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:29:26 +0000 UTC,LastTransitionTime:2023-01-31 23:04:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.35,},NodeAddress{Type:ExternalIP,Address:13.234.59.214,},NodeAddress{Type:InternalDNS,Address:i-06cd6913614b82fae.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-06cd6913614b82fae.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-234-59-214.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2fe6d62cd03fa1e9a0168b7ed51f3d,SystemUUID:ec2fe6d6-2cd0-3fa1-e9a0-168b7ed51f3d,BootID:489a2ef8-6182-4966-b727-c39ed14a8bfb,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:47953400,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41528818,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:18561972,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:5025027,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:773367,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 31 23:33:11.273: INFO: Logging kubelet events for node i-06cd6913614b82fae Jan 31 23:33:11.515: INFO: Logging pods the kubelet thinks is on node i-06cd6913614b82fae Jan 31 23:33:11.768: INFO: ebs-csi-node-cbq6c started at 2023-01-31 23:04:38 +0000 UTC (0+3 container statuses recorded) Jan 31 23:33:11.768: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:11.768: INFO: Container liveness-probe ready: true, restart count 0 Jan 31 23:33:11.768: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 31 23:33:11.768: INFO: cilium-7v65m started at 2023-01-31 23:04:38 +0000 UTC (1+1 container statuses recorded) Jan 31 23:33:11.768: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 31 23:33:11.768: INFO: Container cilium-agent ready: true, restart count 0 E0131 23:33:11.849357 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:12.564: INFO: Latency metrics for node i-06cd6913614b82fae Jan 31 23:33:12.564: INFO: Logging node info for node i-0b2bb17140492f5c5 Jan 31 23:33:12.803: INFO: Node Info: &Node{ObjectMeta:{i-0b2bb17140492f5c5 83b3f00d-e9f5-4159-b084-b1df88670707 51169 0 2023-01-31 23:04:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64 kubernetes.io/hostname:i-0b2bb17140492f5c5 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-0b2bb17140492f5c5 topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0b2bb17140492f5c5"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-31 23:04:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:58 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:29:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-0b2bb17140492f5c5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027248640 0} {<nil>} 3932860Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922391040 0} {<nil>} 3830460Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:29:37 +0000 UTC,LastTransitionTime:2023-01-31 23:04:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:29:37 +0000 UTC,LastTransitionTime:2023-01-31 23:04:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:29:37 +0000 UTC,LastTransitionTime:2023-01-31 23:04:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:29:37 +0000 UTC,LastTransitionTime:2023-01-31 23:05:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.26,},NodeAddress{Type:ExternalIP,Address:52.66.200.119,},NodeAddress{Type:InternalDNS,Address:i-0b2bb17140492f5c5.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0b2bb17140492f5c5.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-66-200-119.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2fa46281a063ad46152d1efe136f1e,SystemUUID:ec2fa462-81a0-63ad-4615-2d1efe136f1e,BootID:3c4282db-f30e-427a-b2ce-67c1f32364c4,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:116764683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:80665728,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:47953400,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41528818,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:21331383,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:18561972,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:13766234,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:773367,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 31 23:33:12.804: INFO: Logging kubelet events for node i-0b2bb17140492f5c5 E0131 23:33:12.849880 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:13.046: INFO: Logging pods the kubelet thinks is on node i-0b2bb17140492f5c5 Jan 31 23:33:13.288: INFO: coredns-559769c974-7zfxc started at 2023-01-31 23:05:18 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:13.288: INFO: Container coredns ready: true, restart count 0 Jan 31 23:33:13.288: INFO: ebs-csi-node-sdgsw started at 2023-01-31 23:04:58 +0000 UTC (0+3 container statuses recorded) Jan 31 23:33:13.288: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:13.288: INFO: Container liveness-probe ready: true, restart count 0 Jan 31 23:33:13.288: INFO: Container node-driver-registrar ready: true, restart count 0 
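The FailedCreatePodSandBox event above is the root cause: node i-0b2bb17140492f5c5 (containerd://1.6.16) has no runtime registered under the handler name "test-handler", so the kubelet can never create the sandbox for the RuntimeClass pod and the test times out in Pending. As a sketch only (the plugin table path and runtime type below are assumptions based on containerd 1.6's CRI plugin defaults, not taken from this run's node configuration), registering the handler on the node would look roughly like:

    # Hypothetical fix sketch for a containerd 1.6 node; paths and values are assumptions.
    # Append a runtime handler named "test-handler" to the CRI plugin config.
    cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler]
      runtime_type = "io.containerd.runc.v2"
    EOF
    # containerd only reads its config at startup, so restart it to pick up the handler.
    sudo systemctl restart containerd

Once a handler with that name exists in the runtime's config, the kubelet can resolve the pod's runtimeClassName to a registered runtime instead of failing sandbox creation, which is consistent with the pod listing that follows.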
Jan 31 23:33:13.288: INFO: test-runtimeclass-runtimeclass-4351-preconfigured-handler-5wrj4 started at 2023-01-31 23:28:06 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:33:13.288: INFO: Container test ready: false, restart count 0
Jan 31 23:33:13.288: INFO: cilium-ntn2p started at 2023-01-31 23:04:58 +0000 UTC (1+1 container statuses recorded)
Jan 31 23:33:13.288: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 31 23:33:13.288: INFO: Container cilium-agent ready: true, restart count 0
E0131 23:33:13.850909 6788 retrywatcher.go:130] "Watch failed" err="context canceled"
Jan 31 23:33:14.094: INFO: Latency metrics for node i-0b2bb17140492f5c5
Jan 31 23:33:14.094: INFO: Logging node info for node i-0f27661869b93b9cf
Jan 31 23:33:14.334: INFO: Node Info: &Node{ObjectMeta:{i-0f27661869b93b9cf 5f59340c-df0c-453d-b0fd-7d5f6e92208a 51917 0 2023-01-31 23:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64 kubernetes.io/hostname:i-0f27661869b93b9cf kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-0f27661869b93b9cf topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0f27661869b93b9cf"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:32:40 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-0f27661869b93b9cf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027252736 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922395136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:32:40 +0000 UTC,LastTransitionTime:2023-01-31 23:04:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:32:40 +0000 UTC,LastTransitionTime:2023-01-31 23:04:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:32:40 +0000 UTC,LastTransitionTime:2023-01-31 23:04:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:32:40 +0000 UTC,LastTransitionTime:2023-01-31 23:04:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.61.60,},NodeAddress{Type:ExternalIP,Address:13.235.94.235,},NodeAddress{Type:InternalDNS,Address:i-0f27661869b93b9cf.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0f27661869b93b9cf.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-235-94-235.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70c48641b5fea969fcf83efc8316,SystemUUID:ec2c70c4-8641-b5fe-a969-fcf83efc8316,BootID:26562ed8-b279-49ee-b7ff-83448bf31531,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:80665728,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:47953400,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41528818,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 
registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:21735644,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:20848413,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:20565480,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:19007623,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:13766234,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:5025027,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2003505,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 31 23:33:14.335: INFO: Logging kubelet events for node i-0f27661869b93b9cf Jan 31 23:33:14.577: INFO: Logging pods the kubelet thinks is on node i-0f27661869b93b9cf Jan 31 23:33:14.833: INFO: cilium-7ht4d started at 2023-01-31 23:04:34 +0000 UTC (1+1 container statuses recorded) Jan 31 23:33:14.833: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container 
cilium-agent ready: true, restart count 0 Jan 31 23:33:14.833: INFO: ebs-csi-node-t8vbr started at 2023-01-31 23:04:34 +0000 UTC (0+3 container statuses recorded) Jan 31 23:33:14.833: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container liveness-probe ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 31 23:33:14.833: INFO: coredns-559769c974-dl7c9 started at 2023-01-31 23:04:52 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:14.833: INFO: Container coredns ready: true, restart count 0 Jan 31 23:33:14.833: INFO: coredns-autoscaler-7cb5c5b969-zbdt5 started at 2023-01-31 23:04:52 +0000 UTC (0+1 container statuses recorded) Jan 31 23:33:14.833: INFO: Container autoscaler ready: true, restart count 0 Jan 31 23:33:14.833: INFO: ebs-csi-controller-555658678f-wvzsz started at 2023-01-31 23:04:52 +0000 UTC (0+5 container statuses recorded) Jan 31 23:33:14.833: INFO: Container csi-attacher ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container csi-provisioner ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container csi-resizer ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:14.833: INFO: Container liveness-probe ready: true, restart count 0 E0131 23:33:14.851181 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:15.635: INFO: Latency metrics for node i-0f27661869b93b9cf Jan 31 23:33:15.635: INFO: Logging node info for node i-0f91383546e2b25fb E0131 23:33:15.851528 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:15.875: INFO: Node Info: &Node{ObjectMeta:{i-0f91383546e2b25fb a0707f7c-2453-494d-b014-b616afd38869 51338 0 2023-01-31 23:04:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:arm64 kubernetes.io/hostname:i-0f91383546e2b25fb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-0f91383546e2b25fb topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0f91383546e2b25fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:04:37 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2023-01-31 23:30:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-0f91383546e2b25fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027252736 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922395136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:30:05 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:30:05 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:30:05 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:30:05 +0000 UTC,LastTransitionTime:2023-01-31 23:04:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.88,},NodeAddress{Type:ExternalIP,Address:13.233.31.6,},NodeAddress{Type:InternalDNS,Address:i-0f91383546e2b25fb.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0f91383546e2b25fb.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-233-31-6.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c500fcdf67f460ecaa4185c5bed0c,SystemUUID:ec2c500f-cdf6-7f46-0eca-a4185c5bed0c,BootID:4c8c5b6a-893f-4392-9cae-92a9be10c6a4,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:116764683,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:80665728,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:47953400,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41528818,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:23857462,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:21735644,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:20848413,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:20565480,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:773367,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 31 23:33:15.875: INFO: Logging kubelet events for node i-0f91383546e2b25fb Jan 31 23:33:16.117: INFO: Logging pods the kubelet thinks is on node i-0f91383546e2b25fb Jan 31 23:33:16.371: INFO: ebs-csi-node-t87dv started at 2023-01-31 23:04:37 +0000 UTC (0+3 container statuses recorded) Jan 31 23:33:16.371: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container liveness-probe ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 31 23:33:16.371: INFO: ebs-csi-controller-555658678f-z2p72 started at 
2023-01-31 23:04:54 +0000 UTC (0+5 container statuses recorded) Jan 31 23:33:16.371: INFO: Container csi-attacher ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container csi-provisioner ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container csi-resizer ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container ebs-plugin ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container liveness-probe ready: true, restart count 0 Jan 31 23:33:16.371: INFO: cilium-98pb9 started at 2023-01-31 23:04:37 +0000 UTC (1+1 container statuses recorded) Jan 31 23:33:16.371: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 31 23:33:16.371: INFO: Container cilium-agent ready: true, restart count 0 E0131 23:33:16.852289 6788 retrywatcher.go:130] "Watch failed" err="context canceled" Jan 31 23:33:17.182: INFO: Latency metrics for node i-0f91383546e2b25fb END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/31/23 23:33:17.182 (8.631s) < Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - dump namespaces | framework.go:206 @ 01/31/23 23:33:17.182 (8.632s) > Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - tear down framework | framework.go:203 @ 01/31/23 23:33:17.182 STEP: Destroying namespace "runtimeclass-4351" for this suite. - test/e2e/framework/framework.go:347 @ 01/31/23 23:33:17.182 < Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - tear down framework | framework.go:203 @ 01/31/23 23:33:17.423 (241ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/31/23 23:33:17.423 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/31/23 23:33:17.423 (0s)
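Both RuntimeClass failures in this run start from the same precheck: the suite launches a hostexec pod on the node and greps the container runtime's config for a "runtimes.test-handler" entry (the ExecWithOptions entries in each test log show the exact command line). For manual triage the same check can be run directly on a node; the sketch below is a cleaned-up, standalone rendering, assuming root on the node and the default containerd/CRI-O config paths shown in the log. The echo messages are added here for readability and are not part of the original script.

#!/bin/sh
# Sketch of the e2e suite's runtime-handler precheck, run directly on the
# node instead of through a hostexec pod (assumes root and default paths).
for cfg in /etc/containerd/config.toml /etc/crio/crio.conf; do
  if [ -e "$cfg" ]; then
    if grep -q 'runtimes.test-handler' "$cfg"; then
      echo "test-handler is configured in $cfg"
    else
      echo "test-handler is MISSING from $cfg"
    fi
    exit 0
  fi
done
echo "no containerd or CRI-O config found" >&2
exit 1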
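What the failing tests exercise is a RuntimeClass whose handler field names a runtime that must already exist in the node's CRI configuration, plus a pod that selects it via runtimeClassName. A minimal hand-written equivalent is sketched below. The object names are illustrative (the suite generates its own, as the log shows); the image, command, and restart policy match the pod spec dumped later in this report.

# Sketch only: illustrative object names; the handler value is the part
# that must match a runtime entry in the node's containerd/CRI-O config.
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: test-handler-demo
handler: test-handler
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-demo
spec:
  runtimeClassName: test-handler-demo
  restartPolicy: Never
  containers:
  - name: test
    image: registry.k8s.io/e2e-test-images/busybox:1.29-4
    command: ["true"]
EOF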
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sRuntimeClass\sshould\srun\sa\sPod\srequesting\sa\sRuntimeClass\swith\sscheduling\swithout\staints$'
[FAILED] timed out while waiting for pod runtimeclass-3304/test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4 to be not pending In [It] at: test/e2e/node/runtimeclass.go:167 @ 01/31/23 23:13:32.607
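This second timeout traces to an event captured in the namespace dump further down: FailedCreatePodSandBox with "failed to get sandbox runtime: no runtime for \"test-handler\" is configured". The kubelet resolved the RuntimeClass to its handler name, but containerd on the scheduled node has no runtime registered under that name, so the sandbox can never be created and the pod stays Pending until the 5m wait expires. The first failure above stalls at the same kind of wait and is consistent with the same cause. The stanza below sketches what the precheck's grep expects to find in containerd 1.6's CRI config; mapping the handler to the stock runc shim is an assumption made for illustration (a common way node images satisfy this test), not something this log confirms.

# Sketch: register a "test-handler" runtime with containerd's CRI plugin.
# Pointing it at the standard runc shim is an assumption for illustration.
cat <<'EOF' >>/etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler]
  runtime_type = "io.containerd.runc.v2"
EOF
systemctl restart containerd   # containerd reads its config at startup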
> Enter [BeforeEach] [sig-node] RuntimeClass - set up framework | framework.go:188 @ 01/31/23 23:08:16.763 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/31/23 23:08:16.764 Jan 31 23:08:16.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass - test/e2e/framework/framework.go:247 @ 01/31/23 23:08:16.765 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/31/23 23:08:17.486 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/31/23 23:08:17.963 < Exit [BeforeEach] [sig-node] RuntimeClass - set up framework | framework.go:188 @ 01/31/23 23:08:18.441 (1.677s) > Enter [BeforeEach] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:33 @ 01/31/23 23:08:18.441 < Exit [BeforeEach] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:33 @ 01/31/23 23:08:18.441 (0s) > Enter [It] should run a Pod requesting a RuntimeClass with scheduling without taints - test/e2e/node/runtimeclass.go:131 @ 01/31/23 23:08:18.441 Jan 31 23:08:19.162: INFO: Waiting up to 5m0s for pod "hostexec-i-0f27661869b93b9cf-4lr6w" in namespace "runtimeclass-3304" to be "running" Jan 31 23:08:19.402: INFO: Pod "hostexec-i-0f27661869b93b9cf-4lr6w": Phase="Pending", Reason="", readiness=false. Elapsed: 239.576247ms Jan 31 23:08:21.641: INFO: Pod "hostexec-i-0f27661869b93b9cf-4lr6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.479226753s Jan 31 23:08:23.642: INFO: Pod "hostexec-i-0f27661869b93b9cf-4lr6w": Phase="Running", Reason="", readiness=true. Elapsed: 4.479518584s Jan 31 23:08:23.642: INFO: Pod "hostexec-i-0f27661869b93b9cf-4lr6w" satisfied condition "running" Jan 31 23:08:23.642: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c if [ -e '/etc/containerd/config.toml' ]; then grep -q 'runtimes.test-handler' /etc/containerd/config.toml return fi if [ -e '/etc/crio/crio.conf' ]; then grep -q 'runtimes.test-handler' /etc/crio/crio.conf return fi ] Namespace:runtimeclass-3304 PodName:hostexec-i-0f27661869b93b9cf-4lr6w ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 31 23:08:23.642: INFO: >>> kubeConfig: /root/.kube/config Jan 31 23:08:23.643: INFO: ExecWithOptions: Clientset creation Jan 31 23:08:23.643: INFO: ExecWithOptions: execute(POST https://api-e2e-e2e-kops-warm-poo-agujp3-6251a9cd4adb6f29.elb.ap-south-1.amazonaws.com/api/v1/namespaces/runtimeclass-3304/pods/hostexec-i-0f27661869b93b9cf-4lr6w/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=if+%5B+-e+%27%2Fetc%2Fcontainerd%2Fconfig.toml%27+%5D%3B+then%0Agrep+-q+%27runtimes.test-handler%27+%2Fetc%2Fcontainerd%2Fconfig.toml%0A++return%0Afi%0A%0Aif+%5B+-e+%27%2Fetc%2Fcrio%2Fcrio.conf%27+%5D%3B+then%0A++grep+-q+%27runtimes.test-handler%27+%2Fetc%2Fcrio%2Fcrio.conf%0A++return%0Afi%0A&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Trying to launch a pod without a label to get a node which can launch it. - test/e2e/scheduling/predicates.go:1038 @ 01/31/23 23:08:25.174 Jan 31 23:08:25.417: INFO: Waiting up to 2m0s for pod "without-label" in namespace "runtimeclass-3304" to be "running" Jan 31 23:08:25.658: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. 
Elapsed: 240.717455ms Jan 31 23:08:27.900: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482963932s Jan 31 23:08:29.900: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.482699036s Jan 31 23:08:29.900: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. - test/e2e/scheduling/predicates.go:969 @ 01/31/23 23:08:30.161 STEP: Trying to apply a label on the found node. - test/e2e/node/runtimeclass.go:148 @ 01/31/23 23:08:30.415 STEP: verifying the node has the label foo-5a6a5824-62ca-418b-8190-70a85b2542a5 bar - test/e2e/framework/node/helper.go:63 @ 01/31/23 23:08:30.661 STEP: verifying the node has the label fizz-03a4514a-9caf-4dde-975c-666ffcff9781 buzz - test/e2e/framework/node/helper.go:63 @ 01/31/23 23:08:31.154 STEP: Trying to create runtimeclass and pod - test/e2e/node/runtimeclass.go:155 @ 01/31/23 23:08:31.395 Jan 31 23:08:31.884: INFO: Waiting up to 5m0s for pod "test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4" in namespace "runtimeclass-3304" to be "not pending" Jan 31 23:08:32.123: INFO: Pod "test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4": Phase="Pending", Reason="", readiness=false. Elapsed: 239.340734ms
[roughly 150 near-identical poll entries elided; the wait loop re-checked the pod about every 2s for the full 5m0s and got Phase="Pending", Reason="", readiness=false every time]
Jan 31 23:13:32.366: INFO: Pod "test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.482190174s Jan 31 23:13:32.605: INFO: Pod "test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4": Phase="Pending", Reason="", readiness=false.
Elapsed: 5m0.721425279s Jan 31 23:13:32.607: INFO: Unexpected error: <*pod.timeoutError | 0xc0055bccf0>: { msg: "timed out while waiting for pod runtimeclass-3304/test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4 to be not pending", observedObjects: [ <*v1.Pod | 0xc00081c900>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4", GenerateName: "test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclass-", Namespace: "runtimeclass-3304", SelfLink: "", UID: "60112b3f-21b3-49ca-b1aa-31cb28c9f0ca", ResourceVersion: "2636", Generation: 0, CreationTimestamp: { Time: { wall: 0, ext: 63810803311, loc: { name: "Local", zone: [ {name: "UTC", offset: 0, isDST: false}, ], tx: [ { when: -9223372036854775808, index: 0, isstd: false, isutc: false, }, ], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: "UTC", offset: 0, isDST: false}, }, }, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: nil, Annotations: nil, OwnerReferences: nil, Finalizers: nil, ManagedFields: [ { Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63810803311, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:automountServiceAccountToken\":{},\"f:containers\":{\"k:{\\\"name\\\":\\\"test\\\"}\":{\".\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:nodeSelector\":{},\"f:restartPolicy\":{},\"f:runtimeClassName\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}", }, Subresource: "", }, { Manager: "kubelet", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63810803311, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:startTime\":{}}}", }, Subresource: "status", }, ], }, Spec: { Volumes: nil, InitContainers: nil, Containers: [ { Name: "test", Image: "registry.k8s.io/e2e-test-images/busybox:1.29-4", Command: ["true"], Args: nil, WorkingDir: "", Ports: nil, EnvFrom: nil, Env: nil, Resources: {Limits: nil, Requests: nil, Claims: nil}, VolumeMounts: nil, VolumeDevices: nil, LivenessProbe: nil, ReadinessProbe: nil, StartupProbe: nil, Lifecycle: nil, TerminationMessagePath: "/dev/termination-log", TerminationMessagePolicy: "File", ImagePullPolicy: "IfNotPresent", SecurityContext: nil, Stdin: false, StdinOnce: false, TTY: false, }, ], EphemeralContainers: nil, RestartPolicy: "Never", 
TerminationGracePeriodSeconds: 30, ActiveDeadlineSeconds: nil, DNSPolicy: "ClusterFirst", NodeSelector: { "fizz-03a4514a-9caf-4dde-975c-666ffcff9781": "buzz", "foo-5a6a5824-62ca-418b-8190-70a85b2542a5": "bar", }, ServiceAccountName: "default", DeprecatedServiceAccount: "default", AutomountServiceAccountToken: false, NodeName: "i-0b2bb17140492f5c5", HostNetwork: false, HostPID: false, HostIPC: false, ShareProcessNamespace: nil, SecurityContext: { SELinuxOptions: nil, WindowsOptions: nil, RunAsUser: nil, RunAsGroup: nil, RunAsNonRoot: nil, ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output [FAILED] timed out while waiting for pod runtimeclass-3304/test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4 to be not pending In [It] at: test/e2e/node/runtimeclass.go:167 @ 01/31/23 23:13:32.607 < Exit [It] should run a Pod requesting a RuntimeClass with scheduling without taints - test/e2e/node/runtimeclass.go:131 @ 01/31/23 23:13:32.607 (5m14.166s) > Enter [AfterEach] [sig-node] RuntimeClass - test/e2e/framework/node/init/init.go:33 @ 01/31/23 23:13:32.607 Jan 31 23:13:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-node] RuntimeClass - test/e2e/framework/node/init/init.go:33 @ 01/31/23 23:13:33.087 (479ms) > Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/node/runtimeclass.go:152 @ 01/31/23 23:13:33.087 STEP: removing the label fizz-03a4514a-9caf-4dde-975c-666ffcff9781 off the node i-0b2bb17140492f5c5 - test/e2e/framework/node/helper.go:72 @ 01/31/23 23:13:33.087 STEP: verifying the node doesn't have the label fizz-03a4514a-9caf-4dde-975c-666ffcff9781 - test/e2e/framework/node/helper.go:75 @ 01/31/23 23:13:33.574 < Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/node/runtimeclass.go:152 @ 01/31/23 23:13:33.815 (728ms) > Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/node/runtimeclass.go:152 @ 01/31/23 23:13:33.815 STEP: removing the label foo-5a6a5824-62ca-418b-8190-70a85b2542a5 off the node i-0b2bb17140492f5c5 - test/e2e/framework/node/helper.go:72 @ 01/31/23 23:13:33.815 STEP: verifying the node doesn't have the label foo-5a6a5824-62ca-418b-8190-70a85b2542a5 - test/e2e/framework/node/helper.go:75 @ 01/31/23 23:13:34.303 < Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/node/runtimeclass.go:152 @ 01/31/23 23:13:34.542 (727ms) > Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/node/runtimeclass/runtimeclass.go:64 @ 01/31/23 23:13:34.542 STEP: Deleting pod hostexec-i-0f27661869b93b9cf-4lr6w in namespace runtimeclass-3304 - test/e2e/framework/pod/delete.go:41 @ 01/31/23 23:13:34.542 < Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/node/runtimeclass/runtimeclass.go:64 @ 01/31/23 23:13:34.791 (249ms) > Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:35 @ 01/31/23 23:13:34.791 < Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - test/e2e/framework/metrics/init/init.go:35 @ 01/31/23 23:13:34.791 (0s) > Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - dump namespaces | framework.go:206 @ 01/31/23 23:13:34.791 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/31/23 23:13:34.791 STEP: Collecting events from namespace "runtimeclass-3304". 
- test/e2e/framework/debug/dump.go:42 @ 01/31/23 23:13:34.791 STEP: Found 14 events. - test/e2e/framework/debug/dump.go:46 @ 01/31/23 23:13:35.031 Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:19 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-4lr6w: {default-scheduler } Scheduled: Successfully assigned runtimeclass-3304/hostexec-i-0f27661869b93b9cf-4lr6w to i-0f27661869b93b9cf Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:19 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-4lr6w: {kubelet i-0f27661869b93b9cf} Pulling: Pulling image "registry.k8s.io/e2e-test-images/agnhost:2.43" Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:22 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-4lr6w: {kubelet i-0f27661869b93b9cf} Pulled: Successfully pulled image "registry.k8s.io/e2e-test-images/agnhost:2.43" in 2.850714765s (2.850729854s including waiting) Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:22 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-4lr6w: {kubelet i-0f27661869b93b9cf} Created: Created container agnhost-container Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:22 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-4lr6w: {kubelet i-0f27661869b93b9cf} Started: Started container agnhost-container Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:25 +0000 UTC - event for without-label: {kubelet i-0b2bb17140492f5c5} Pulling: Pulling image "registry.k8s.io/pause:3.9" Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:25 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned runtimeclass-3304/without-label to i-0b2bb17140492f5c5 Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:27 +0000 UTC - event for without-label: {kubelet i-0b2bb17140492f5c5} Pulled: Successfully pulled image "registry.k8s.io/pause:3.9" in 1.67286797s (1.680186592s including waiting) Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:27 +0000 UTC - event for without-label: {kubelet i-0b2bb17140492f5c5} Created: Created container without-label Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:27 +0000 UTC - event for without-label: {kubelet i-0b2bb17140492f5c5} Started: Started container without-label Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:30 +0000 UTC - event for without-label: {kubelet i-0b2bb17140492f5c5} Killing: Stopping container without-label Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:31 +0000 UTC - event for test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4: {default-scheduler } Scheduled: Successfully assigned runtimeclass-3304/test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4 to i-0b2bb17140492f5c5 Jan 31 23:13:35.031: INFO: At 2023-01-31 23:08:31 +0000 UTC - event for test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4: {kubelet i-0b2bb17140492f5c5} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox runtime: no runtime for "test-handler" is configured Jan 31 23:13:35.031: INFO: At 2023-01-31 23:13:34 +0000 UTC - event for hostexec-i-0f27661869b93b9cf-4lr6w: {kubelet i-0f27661869b93b9cf} Killing: Stopping container agnhost-container Jan 31 23:13:35.270: INFO: POD NODE PHASE GRACE CONDITIONS Jan 31 23:13:35.270: INFO: test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4 i-0b2bb17140492f5c5 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:08:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:08:31 +0000 UTC ContainersNotReady containers with unready status: [test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2023-01-31 23:08:31 +0000 UTC ContainersNotReady containers with unready status: [test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-31 23:08:31 +0000 UTC }] Jan 31 23:13:35.270: INFO: Jan 31 23:13:35.728: INFO: Unable to fetch runtimeclass-3304/test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4/test logs: the server rejected our request for an unknown reason (get pods test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4) Jan 31 23:13:35.972: INFO: Logging node info for node i-01f2014d35b8c12f6 Jan 31 23:13:36.212: INFO: Node Info: &Node{ObjectMeta:{i-01f2014d35b8c12f6 3ed71f02-c5f9-4792-b864-3ebf0393fb20 6018 0 2023-01-31 23:02:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:c6g.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:arm64 kubernetes.io/hostname:i-01f2014d35b8c12f6 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c6g.large topology.ebs.csi.aws.com/zone:ap-south-1a topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-01f2014d35b8c12f6"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-31 23:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {protokube Update v1 2023-01-31 23:02:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:03:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:03:28 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-31 23:03:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-01f2014d35b8c12f6,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4002136064 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3897278464 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:09:47 +0000 UTC,LastTransitionTime:2023-01-31 23:02:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:09:47 +0000 UTC,LastTransitionTime:2023-01-31 23:02:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:09:47 +0000 UTC,LastTransitionTime:2023-01-31 23:02:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:09:47 +0000 UTC,LastTransitionTime:2023-01-31 23:03:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.57.226,},NodeAddress{Type:ExternalIP,Address:13.234.119.60,},NodeAddress{Type:InternalDNS,Address:i-01f2014d35b8c12f6.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-01f2014d35b8c12f6.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-234-119-60.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b7a454b38daa04b90c6fd457c5ab6,SystemUUID:ec2b7a45-4b38-daa0-4b90-c6fd457c5ab6,BootID:9e578a7e-09ed-4dad-9613-7d6fc65f5c24,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d registry.k8s.io/etcd:3.5.0-0],SizeBytes:157797353,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 registry.k8s.io/etcd:3.4.3-0],SizeBytes:151677080,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 registry.k8s.io/etcd:3.4.13-0],SizeBytes:134531559,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:131348942,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:0d2ab6b15b4fe0b903b9e81192d942fb39c29be61fe341cd16db5d0165b0435c registry.k8s.io/etcd:3.3.17-0],SizeBytes:126979107,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:121191220,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 registry.k8s.io/etcd:3.3.10-0],SizeBytes:81436097,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1 registry.k8s.io/etcd:3.5.4-0],SizeBytes:81120118,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 registry.k8s.io/etcd:3.5.3-0],SizeBytes:81114891,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:80665728,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:80539316,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 registry.k8s.io/etcd:3.2.24-1],SizeBytes:61500433,},ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:81677375343460c8b223d5217ed2ee037d2578b470b3260637a735f57e5bdf04 registry.k8s.io/etcdadm/etcd-manager:v3.0.20230119-slim],SizeBytes:60827222,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263 
registry.k8s.io/etcd:3.5.1-0],SizeBytes:59553904,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:56048435,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:41210835,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:39793999,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:24567287,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:18345226,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:4691082,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 31 23:13:36.212: INFO: Logging kubelet events for node i-01f2014d35b8c12f6
Jan 31 23:13:36.456: INFO: Logging pods the kubelet thinks is on node i-01f2014d35b8c12f6
Jan 31 23:13:36.942: INFO: dns-controller-85d56fc6db-b6qcv started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container dns-controller ready: true, restart count 0
Jan 31 23:13:36.942: INFO: kops-controller-jjk5s started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container kops-controller ready: true, restart count 0
Jan 31 23:13:36.942: INFO: aws-cloud-controller-manager-vwrlq started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container aws-cloud-controller-manager ready: true, restart count 0
Jan 31 23:13:36.942: INFO: etcd-manager-events-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (11+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-2-24 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-3-10 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-3-17 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-4-13 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-4-3 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-0 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-1 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-3 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-4 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-6 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-7 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Container etcd-manager ready: true, restart count 0
Jan 31 23:13:36.942: INFO: ebs-csi-node-8r2xx started at 2023-01-31 23:03:00 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 31 23:13:36.942: INFO: cilium-operator-6f7d7f696f-2kr6d started at 2023-01-31 23:03:00 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container cilium-operator ready: true, restart count 0
Jan 31 23:13:36.942: INFO: kube-scheduler-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container kube-scheduler ready: true, restart count 0
Jan 31 23:13:36.942: INFO: cilium-t6nbx started at 2023-01-31 23:03:00 +0000 UTC (1+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Container cilium-agent ready: true, restart count 0
Jan 31 23:13:36.942: INFO: etcd-manager-main-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (11+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-2-24 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-3-10 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-3-17 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-4-13 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-4-3 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-0 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-1 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-3 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-4 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-6 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Init container init-etcd-3-5-7 ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Container etcd-manager ready: true, restart count 0
Jan 31 23:13:36.942: INFO: kube-apiserver-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (0+2 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container healthcheck ready: true, restart count 0
Jan 31 23:13:36.942: INFO: Container kube-apiserver ready: true, restart count 3
Jan 31 23:13:36.942: INFO: kube-controller-manager-i-01f2014d35b8c12f6 started at 2023-01-31 23:00:08 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:36.942: INFO: Container kube-controller-manager ready: true, restart count 4
Jan 31 23:13:37.737: INFO: Latency metrics for node i-01f2014d35b8c12f6
Jan 31 23:13:37.737: INFO: Logging node info for node i-06cd6913614b82fae
Jan 31 23:13:37.977: INFO: Node Info: &Node{ObjectMeta:{i-06cd6913614b82fae 8e138151-af54-4408-a2ea-d3335d4fe064 14829 0 2023-01-31 23:04:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64
kubernetes.io/hostname:i-06cd6913614b82fae kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-06cd6913614b82fae topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-06cd6913614b82fae"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:13:12 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-31 23:13:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-06cd6913614b82fae,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027252736 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922395136 0} {<nil>} BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:17 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:17 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:17 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:13:17 +0000 UTC,LastTransitionTime:2023-01-31 23:04:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.35,},NodeAddress{Type:ExternalIP,Address:13.234.59.214,},NodeAddress{Type:InternalDNS,Address:i-06cd6913614b82fae.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-06cd6913614b82fae.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-234-59-214.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2fe6d62cd03fa1e9a0168b7ed51f3d,SystemUUID:ec2fe6d6-2cd0-3fa1-e9a0-168b7ed51f3d,BootID:489a2ef8-6182-4966-b727-c39ed14a8bfb,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:773367,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-06ff5fb56e791d64c kubernetes.io/csi/ebs.csi.aws.com^vol-09bda57513c546a4e],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-06ff5fb56e791d64c,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-09bda57513c546a4e,DevicePath:,},},Config:nil,},}
Jan 31 23:13:37.977: INFO: Logging kubelet events for node i-06cd6913614b82fae
Jan 31 23:13:38.220: INFO: Logging pods the kubelet thinks is on node i-06cd6913614b82fae
Jan 31 23:13:38.493: INFO: pod-submit-remove-3a53f103-7231-4e4e-a6b3-df1366aff218 started at <nil> (0+0 container statuses recorded)
Jan 31 23:13:38.493: INFO: ebs-csi-node-cbq6c started at 2023-01-31 23:04:38 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 31 23:13:38.493: INFO: hostexec-i-06cd6913614b82fae-xqv2r started at 2023-01-31 23:13:19 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container agnhost-container ready: true, restart count 0
Jan 31 23:13:38.493: INFO: busybox-readonly-false-85c350b5-6729-48ba-974f-49643d6e8a8f started at 2023-01-31 23:13:20 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container busybox-readonly-false-85c350b5-6729-48ba-974f-49643d6e8a8f ready: false, restart count 0
Jan 31 23:13:38.493: INFO: ss2-0 started at 2023-01-31 23:13:21 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container webserver ready: true, restart count 0
Jan 31 23:13:38.493: INFO: netserver-0 started at <nil> (0+0 container statuses recorded)
Jan 31 23:13:38.493: INFO: webserver-6bb8846c6d-v9cgn started at 2023-01-31 23:13:10 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container httpd ready: true, restart count 0
Jan 31 23:13:38.493: INFO: pod-terminate-status-0-8 started at <nil> (0+0 container statuses recorded)
Jan 31 23:13:38.493: INFO: test-ss-1 started at <nil> (0+0 container statuses recorded)
Jan 31 23:13:38.493: INFO: ss2-1 started at 2023-01-31 23:13:10 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container webserver ready: true, restart count 0
Jan 31 23:13:38.493: INFO: cilium-7v65m started at 2023-01-31 23:04:38 +0000 UTC (1+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container cilium-agent ready: true, restart count 0
Jan 31 23:13:38.493: INFO: pod-7e016aba-3062-41b4-b3d7-1c46b66a071d started at 2023-01-31 23:13:09 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container write-pod ready: true, restart count 0
Jan 31 23:13:38.493: INFO: netserver-0 started at 2023-01-31 23:12:37 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container webserver ready: true, restart count 0
Jan 31 23:13:38.493: INFO: pod-projected-secrets-887308d5-ee22-4763-a844-4787feaff5c8 started at 2023-01-31 23:12:49 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container creates-volume-test ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container dels-volume-test ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container upds-volume-test ready: true, restart count 0
Jan 31 23:13:38.493: INFO: test-pod started at 2023-01-31 23:13:24 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:38.493: INFO: Container busybox-1 ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container busybox-2 ready: true, restart count 0
Jan 31 23:13:38.493: INFO: Container busybox-3 ready: true, restart count 0
Jan 31 23:13:39.591: INFO: Latency metrics for node i-06cd6913614b82fae
Jan 31 23:13:39.591: INFO: Logging node info for node i-0b2bb17140492f5c5
Jan 31 23:13:39.831: INFO: Node Info: &Node{ObjectMeta:{i-0b2bb17140492f5c5 83b3f00d-e9f5-4159-b084-b1df88670707 15547 0 2023-01-31 23:04:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64 kubernetes.io/hostname:i-0b2bb17140492f5c5 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-0b2bb17140492f5c5 topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-expansion-5248":"csi-mock-csi-mock-volumes-expansion-5248","csi-mock-csi-mock-volumes-workload-5163":"i-0b2bb17140492f5c5","ebs.csi.aws.com":"i-0b2bb17140492f5c5"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-31 23:04:57 +0000 UTC
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:58 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:05:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:13:19 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-31 23:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-0b2bb17140492f5c5,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027248640 0} {<nil>} 3932860Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922391040 0} {<nil>} 3830460Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:29 +0000 UTC,LastTransitionTime:2023-01-31 23:04:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:29 +0000 UTC,LastTransitionTime:2023-01-31 23:04:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:29 +0000 UTC,LastTransitionTime:2023-01-31 23:04:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:13:29 +0000 UTC,LastTransitionTime:2023-01-31 23:05:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.26,},NodeAddress{Type:ExternalIP,Address:52.66.200.119,},NodeAddress{Type:InternalDNS,Address:i-0b2bb17140492f5c5.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0b2bb17140492f5c5.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-66-200-119.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2fa46281a063ad46152d1efe136f1e,SystemUUID:ec2fa462-81a0-63ad-4615-2d1efe136f1e,BootID:3c4282db-f30e-427a-b2ce-67c1f32364c4,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:116764683,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:21331383,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:18561972,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:13766234,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:773367,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-workload-5163^b915528a-a1bc-11ed-a195-76601068a9b9],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-workload-5163^b915528a-a1bc-11ed-a195-76601068a9b9,DevicePath:,},},Config:nil,},}
Jan 31 23:13:39.831: INFO: Logging kubelet events for node i-0b2bb17140492f5c5
Jan 31 23:13:40.076: INFO: Logging pods the kubelet thinks is on node i-0b2bb17140492f5c5
Jan 31 23:13:40.323: INFO: ebs-csi-node-sdgsw started at 2023-01-31 23:04:58 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 31 23:13:40.323: INFO: hostexec-i-0b2bb17140492f5c5-vtvxw started at 2023-01-31 23:13:17 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container agnhost-container ready: true, restart count 0
Jan 31 23:13:40.323: INFO: netserver-1 started at 2023-01-31 23:13:36 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container webserver ready: false, restart count 0
Jan 31 23:13:40.323: INFO: pod-subpath-test-preprovisionedpv-hj7r started at 2023-01-31 23:13:39 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container test-container-subpath-preprovisionedpv-hj7r ready: false, restart count 0
Jan 31 23:13:40.323: INFO: csi-mockplugin-0 started at 2023-01-31 23:13:20 +0000 UTC (0+4 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container busybox ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container csi-provisioner ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container driver-registrar ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container mock ready: true, restart count 0
Jan 31 23:13:40.323: INFO: csi-mockplugin-0 started at 2023-01-31 23:12:18 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container csi-provisioner ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container driver-registrar ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container mock ready: true, restart count 0
Jan 31 23:13:40.323: INFO: csi-mockplugin-attacher-0 started at 2023-01-31 23:12:18 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container csi-attacher ready: true, restart count 0
Jan 31 23:13:40.323: INFO: webserver-6bb8846c6d-9qlt5 started at 2023-01-31 23:13:30 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container httpd ready: true, restart count 0
Jan 31 23:13:40.323: INFO: pvc-volume-tester-jvn9s started at 2023-01-31 23:12:25 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container volume-tester ready: true, restart count 0
Jan 31 23:13:40.323: INFO: cilium-ntn2p started at 2023-01-31 23:04:58 +0000 UTC (1+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 31 23:13:40.323: INFO: Container cilium-agent ready: true, restart count 0
Jan 31 23:13:40.323: INFO: test-runtimeclass-runtimeclass-3304-non-conflict-runtimeclk58r4 started at 2023-01-31 23:08:31 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container test ready: false, restart count 0
Jan 31 23:13:40.323: INFO: csi-mockplugin-attacher-0 started at 2023-01-31 23:13:20 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container csi-attacher ready: true, restart count 0
Jan 31 23:13:40.323: INFO: hostexec-i-0b2bb17140492f5c5-jl9w2 started at 2023-01-31 23:13:40 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container agnhost-container ready: false, restart count 0
Jan 31 23:13:40.323: INFO: coredns-559769c974-7zfxc started at 2023-01-31 23:05:18 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:40.323: INFO: Container coredns ready: true, restart count 0
Jan 31 23:13:41.143: INFO: Latency metrics for node i-0b2bb17140492f5c5
Jan 31 23:13:41.143: INFO: Logging node info for node i-0f27661869b93b9cf
Jan 31 23:13:41.383: INFO: Node Info: &Node{ObjectMeta:{i-0f27661869b93b9cf 5f59340c-df0c-453d-b0fd-7d5f6e92208a 15526 0 2023-01-31 23:04:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1 failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64 kubernetes.io/hostname:i-0f27661869b93b9cf kubernetes.io/os:linux
node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-0f27661869b93b9cf topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-attach-9709":"i-0f27661869b93b9cf","ebs.csi.aws.com":"i-0f27661869b93b9cf"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-31 23:04:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:04:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:13:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-0f27661869b93b9cf,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027252736 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922395136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:34 
+0000 UTC,LastTransitionTime:2023-01-31 23:04:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:34 +0000 UTC,LastTransitionTime:2023-01-31 23:04:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:34 +0000 UTC,LastTransitionTime:2023-01-31 23:04:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:13:34 +0000 UTC,LastTransitionTime:2023-01-31 23:04:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.61.60,},NodeAddress{Type:ExternalIP,Address:13.235.94.235,},NodeAddress{Type:InternalDNS,Address:i-0f27661869b93b9cf.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0f27661869b93b9cf.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-235-94-235.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70c48641b5fea969fcf83efc8316,SystemUUID:ec2c70c4-8641-b5fe-a969-fcf83efc8316,BootID:26562ed8-b279-49ee-b7ff-83448bf31531,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41528818,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 
registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:21735644,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:20848413,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:20565480,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:19007623,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:13766234,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 31 23:13:41.384: INFO: Logging kubelet events for node i-0f27661869b93b9cf
Jan 31 23:13:41.628: INFO: Logging pods the kubelet thinks is on node i-0f27661869b93b9cf
Jan 31 23:13:41.887: INFO: coredns-autoscaler-7cb5c5b969-zbdt5 started at 2023-01-31 23:04:52 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container autoscaler ready: true, restart count 0
Jan 31 23:13:41.887: INFO: ebs-csi-controller-555658678f-wvzsz started at 2023-01-31 23:04:52 +0000 UTC (0+5 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container csi-attacher ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container csi-provisioner ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container csi-resizer ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:41.887: INFO: netserver-2 started at 2023-01-31 23:13:36 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container webserver ready: false, restart count 0
Jan 31 23:13:41.887: INFO: hostexec-i-0f27661869b93b9cf-7rv77 started at 2023-01-31 23:13:34 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container agnhost-container ready: true, restart count 0
Jan 31 23:13:41.887: INFO: pod-terminate-status-2-8 started at 2023-01-31 23:13:36 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container fail ready: false, restart count 0
Jan 31 23:13:41.887: INFO: test-pod-b563b698-d50f-431c-a2ed-ce3982c214c5 started at 2023-01-31 23:13:36 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container agnhost-container ready: false, restart count 0
Jan 31 23:13:41.887: INFO: ss2-2 started at <nil> (0+0 container statuses recorded)
Jan 31 23:13:41.887: INFO: webserver-6bb8846c6d-v4n5v started at 2023-01-31 23:13:30 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container httpd ready: true, restart count 0
Jan 31 23:13:41.887: INFO: exceed-active-deadline-fxcj9 started at 2023-01-31 23:13:20 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container c ready: true, restart count 0
Jan 31 23:13:41.887: INFO: pod-terminate-status-1-6 started at 2023-01-31 23:13:35 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container fail ready: false, restart count 0
Jan 31 23:13:41.887: INFO: csi-mockplugin-0 started at 2023-01-31 23:13:16 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container csi-provisioner ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container driver-registrar ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container mock ready: true, restart count 0
Jan 31 23:13:41.887: INFO: cilium-7ht4d started at 2023-01-31 23:04:34 +0000 UTC (1+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container cilium-agent ready: true, restart count 0
Jan 31 23:13:41.887: INFO: ebs-csi-node-t8vbr started at 2023-01-31 23:04:34 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:41.887: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 31 23:13:41.887: INFO: coredns-559769c974-dl7c9 started at 2023-01-31 23:04:52 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:41.887: INFO: Container coredns ready: true, restart count 0
Jan 31 23:13:43.071: INFO: Latency metrics for node i-0f27661869b93b9cf
Jan 31 23:13:43.071: INFO: Logging node info for node i-0f91383546e2b25fb
Jan 31 23:13:43.311: INFO: Node Info: &Node{ObjectMeta:{i-0f91383546e2b25fb a0707f7c-2453-494d-b014-b616afd38869 15252 0 2023-01-31 23:04:37 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/instance-type:t4g.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-south-1
failure-domain.beta.kubernetes.io/zone:ap-south-1a kubernetes.io/arch:arm64 kubernetes.io/hostname:i-0f91383546e2b25fb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t4g.medium topology.ebs.csi.aws.com/zone:ap-south-1a topology.hostpath.csi/node:i-0f91383546e2b25fb topology.kubernetes.io/region:ap-south-1 topology.kubernetes.io/zone:ap-south-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volume-expand-9151":"i-0f91383546e2b25fb","ebs.csi.aws.com":"i-0f91383546e2b25fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-31 23:04:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-31 23:13:19 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-31 23:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-south-1a/i-0f91383546e2b25fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49756430336 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{4027252736 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44780787229 0} {<nil>} 44780787229 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},hugepages-32Mi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-64Ki: {{0 0} {<nil>} 0 DecimalSI},memory: {{3922395136 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:27 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:27 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-31 23:13:27 +0000 UTC,LastTransitionTime:2023-01-31 23:04:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-31 23:13:27 +0000 UTC,LastTransitionTime:2023-01-31 23:04:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.88,},NodeAddress{Type:ExternalIP,Address:13.233.31.6,},NodeAddress{Type:InternalDNS,Address:i-0f91383546e2b25fb.ap-south-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0f91383546e2b25fb.ap-south-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-233-31-6.ap-south-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c500fcdf67f460ecaa4185c5bed0c,SystemUUID:ec2c500f-cdf6-7f46-0eca-a4185c5bed0c,BootID:4c8c5b6a-893f-4392-9cae-92a9be10c6a4,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.16,KubeletVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,KubeProxyVersion:v1.27.0-alpha.1.161+c4ebbeeb747cd3,OperatingSystem:linux,Architecture:arm64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:157636062,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:116764683,},ContainerImage{Names:[registry.k8s.io/kube-proxy-arm64:v1.27.0-alpha.1.161_c4ebbeeb747cd3],SizeBytes:63181512,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:49808364,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41528818,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40343601,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:28932904,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 
registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:23945474,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:22678632,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:22434180,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:22394501,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:21735644,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:20848413,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:20565480,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18267611,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:8502345,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8223819,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:7685305,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:6883347,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6860089,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:773367,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:772933,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:268051,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:253553,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-9151^c50606d2-a1bc-11ed-ad88-b69d169b91c6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-9151^c50606d2-a1bc-11ed-ad88-b69d169b91c6,DevicePath:,},},Config:nil,},}
Jan 31 23:13:43.311: INFO: Logging kubelet events for node i-0f91383546e2b25fb
Jan 31 23:13:43.554: INFO: Logging pods the kubelet thinks is on node i-0f91383546e2b25fb
Jan 31 23:13:43.803: INFO: funny-ip-8xtd6 started at 2023-01-31 23:12:15 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container funny-ip ready: true, restart count 0
Jan 31 23:13:43.803: INFO: inline-volume-7v26h started at 2023-01-31 23:12:56 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container volume-tester ready: false, restart count 0
Jan 31 23:13:43.803: INFO: test-host-network-pod started at 2023-01-31 23:13:33 +0000 UTC (0+2 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container busybox-1 ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container busybox-2 ready: true, restart count 0
Jan 31 23:13:43.803: INFO: cilium-98pb9 started at 2023-01-31 23:04:37 +0000 UTC (1+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container cilium-agent ready: true, restart count 0
Jan 31 23:13:43.803: INFO: termination-message-containerf1578323-b016-4cc2-97f1-2ee84b30eb0b started at 2023-01-31 23:13:41 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container termination-message-container ready: false, restart count 0
Jan 31 23:13:43.803: INFO: netserver-3 started at 2023-01-31 23:13:36 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container webserver ready: false, restart count 0
Jan 31 23:13:43.803: INFO: ebs-csi-node-t87dv started at 2023-01-31 23:04:37 +0000 UTC (0+3 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 31 23:13:43.803: INFO: execpod985tf started at 2023-01-31 23:12:18 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container agnhost-container ready: true, restart count 0
Jan 31 23:13:43.803: INFO: ss2-1 started at 2023-01-31 23:13:31 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container webserver ready: true, restart count 0
Jan 31 23:13:43.803: INFO: csi-hostpathplugin-0 started at 2023-01-31 23:12:39 +0000 UTC (0+7 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container csi-attacher ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container csi-provisioner ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container csi-resizer ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container hostpath ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 31 23:13:43.803: INFO: exceed-active-deadline-j7fsr started at 2023-01-31 23:13:20 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container c ready: true, restart count 0
Jan 31 23:13:43.803: INFO: test-ss-0 started at 2023-01-31 23:13:25 +0000 UTC (0+1 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container webserver ready: true, restart count 0
Jan 31 23:13:43.803: INFO: ebs-csi-controller-555658678f-z2p72 started at 2023-01-31 23:04:54 +0000 UTC (0+5 container statuses recorded)
Jan 31 23:13:43.803: INFO: Container csi-attacher ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container csi-provisioner ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container csi-resizer ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container ebs-plugin ready: true, restart count 0
Jan 31 23:13:43.803: INFO: Container liveness-probe ready: true, restart count 0
Jan 31 23:13:45.393: INFO: Latency metrics for node i-0f91383546e2b25fb
END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/31/23 23:13:45.393 (10.602s)
< Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - dump namespaces | framework.go:206 @ 01/31/23 23:13:45.393 (10.602s)
> Enter [DeferCleanup (Each)] [sig-node] RuntimeClass - tear down framework | framework.go:203 @ 01/31/23 23:13:45.393
STEP: Destroying namespace "runtimeclass-3304" for this suite. - test/e2e/framework/framework.go:347 @ 01/31/23 23:13:45.393
< Exit [DeferCleanup (Each)] [sig-node] RuntimeClass - tear down framework | framework.go:203 @ 01/31/23 23:13:45.634 (242ms)
> Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/31/23 23:13:45.634
< Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/31/23 23:13:45.635 (0s)
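The per-container readiness lines above come from the framework's per-node debug dump. As a point of reference, here is a minimal client-go sketch that reproduces a similar listing by asking the API server for the pods bound to that node; the framework itself may query the kubelet directly, and the kubeconfig path is an assumption, so treat this as an illustration rather than the suite's own helper.

```go
// Sketch: list the pods scheduled on one node and print per-container
// readiness, similar in spirit to the "Logging pods the kubelet thinks
// is on node ..." dump above. Assumes a reachable cluster via
// ~/.kube/config; the node name is taken from the dump above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodeName := "i-0f91383546e2b25fb"
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
		// Restrict the list to pods bound to this node.
		FieldSelector: fields.OneTermEqualSelector("spec.nodeName", nodeName).String(),
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}
```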
exit status 255
from junit_runner.xml
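For context on the RuntimeClass failures recorded above: a RuntimeClass object simply names a handler that must already be configured in each node's container runtime (containerd 1.6.16 on the node dumped above), and the test pods reference that class by name. Below is a minimal client-go sketch of the two object shapes involved. The class and handler names are hypothetical placeholders, not the ones this suite uses, and this is not the suite's own code; a pod whose handler is missing from the runtime configuration typically never gets a sandbox and hangs in Pending. The long list that follows enumerates the rest of the suite's test cases from this run.

```go
// Sketch (hypothetical names): create a RuntimeClass and a Pod that
// requests it, the object shapes exercised by the [sig-node]
// RuntimeClass cases. "example-handler" is a placeholder; it must
// match a runtime entry in the node's CRI configuration.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-rc"},
		Handler:    "example-handler", // must exist on every node that can run the pod
	}
	if _, err := client.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	rcName := rc.Name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "example-runtimeclass-"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName, // ties the pod to the handler above
			RestartPolicy:    corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name: "busybox",
				// Image already present on the node dumped above.
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2",
				Command: []string{"true"},
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// A test built on this would then poll the pod until Succeeded or Failed.
	fmt.Println("created", created.Name)
}
```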
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should create/apply an invalid CR with extra properties for CRD with validation schema
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect duplicates in a CR when preserving unknown fields
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown and duplicate fields of a typed object
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-api-machinery] FieldValidation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should apply changes to a resourcequota status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore DisruptionTarget condition
Kubernetes e2e suite [It] [sig-apps] Job Using a pod failure policy to not count some failures towards the backoffLimit Ignore exit code 137
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy on exit code to fail the job early
Kubernetes e2e suite [It] [sig-apps] Job should allow to use the pod failure policy to not count the failure towards the backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should get and update a ReplicationController scale [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should update a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] SubjectReview should support SubjectReview API operations [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl events should show event when pod is created
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-cli] Kubectl logs default container logs the second container is the default-container by annotation should log default container if not specified
Kubernetes e2e suite [It] [sig-cli] Kubectl logs logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics slis from API server.
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS HostNetwork should resolve DNS of partial qualified names for services on hostNetwork pods with dnsPolicy: ClusterFirstWithHostNet [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] DNS should work with the pod containing more than 6 DNS search paths and longer than 256 search list characters
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoint with multiple subsets and same IP address
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic through service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve endpoints on same port and different protocol for internal traffic on Type LoadBalancer
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should work after the service has been recreated
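Most of the Services cases above (NodePort provisioning, ClientIP session affinity, and the affinity timeout) are variations on one small Service spec. A hedged sketch in the suite's own k8s.io/api Go types; the name, selector, ports, and timeout value are hypothetical:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	affinityTimeout := int32(120) // seconds; the knob the affinity-timeout tests vary
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"}, // illustrative
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "backend"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
				Protocol:   corev1.ProtocolTCP,
			}},
			// ClientIP affinity pins a given client to one endpoint until the timeout expires.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &affinityTimeout},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}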
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
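Each lifecycle-hook case above attaches a postStart or preStop handler of one of three flavors (exec, http, https) to a container. A rough sketch combining an exec postStart with an HTTP preStop; the handler command, path, and port are invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "lifecycle-hooks-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // illustrative
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container right after it starts; failure kills the container.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo started > /tmp/poststart"}},
					},
					// Called before termination; the kubelet waits for it, bounded by the grace period.
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/shutdown", Port: intstr.FromInt(8080)},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}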
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
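The Downward API env cases above inject pod and resource metadata through fieldRef and resourceFieldRef selectors. A small sketch of both mechanisms; the variable names and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	container := corev1.Container{
		Name:  "app",
		Image: "busybox", // illustrative
		Env: []corev1.EnvVar{
			// fieldRef pulls values from the pod object itself.
			{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
			{Name: "HOST_IP", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"}}},
			// resourceFieldRef pulls limits/requests; when no limit is set it falls back
			// to node allocatable, which the "default limits" test above relies on.
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{ContainerName: "app", Resource: "limits.cpu"}}},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"}, // illustrative
		Spec:       corev1.PodSpec{Containers: []corev1.Container{container}},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}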
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
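The QOS Class case above hinges on the relationship between requests and limits: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits at all yields BestEffort. A sketch of a spec the system would classify as Guaranteed (names and quantities illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rl := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("128Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-guaranteed-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox", // illustrative
				// Requests == Limits, so status.qosClass becomes "Guaranteed".
				Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}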
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted when startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
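The Probing cases above cover the three probe types and their interplay: until a startup probe succeeds, liveness and readiness probes are held back, which is what the "startup probe delays it" and "startup probe enables it" tests check. A sketch wiring all three onto one container, mirroring the handlers named in the tests (an exec "cat /tmp/health", an HTTP /healthz check, a TCP socket check); the image, port, and thresholds are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "app",
		Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // illustrative
		// Gates the probes below: they do not run until this one succeeds.
		StartupProbe: &corev1.Probe{
			ProbeHandler:     corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}},
			FailureThreshold: 30,
			PeriodSeconds:    2,
		},
		LivenessProbe: &corev1.Probe{
			ProbeHandler:   corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}},
			PeriodSeconds:  5,
			TimeoutSeconds: 1, // an exec probe that outlives this is what the timeout tests exercise
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)}},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "probe-demo"}, // illustrative
		Spec:       corev1.PodSpec{Containers: []corev1.Container{container}},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}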
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
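The RuntimeClass tests above create and schedule against objects like the following sketch. The class and handler names here are hypothetical; the handler must match a runtime handler configured in the node's CRI runtime (containerd or CRI-O) for a pod to run rather than be rejected:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-rc"}, // hypothetical
		Handler:    "example-handler",                     // hypothetical; must exist in the CRI config
		// Overhead is added to the pod's resource accounting at admission,
		// which is what the "initialize its Overhead" test verifies.
		Overhead: &nodev1.Overhead{PodFixed: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("250m"),
			corev1.ResourceMemory: resource.MustParse("64Mi"),
		}},
		// Scheduling restricts pods using this class to matching nodes; a pod whose own
		// nodeSelector conflicts with this is rejected, per the "conflicting node selector" test.
		Scheduling: &nodev1.Scheduling{NodeSelector: map[string]string{"example.com/runtime": "special"}},
	}
	rcName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers:       []corev1.Container{{Name: "app", Image: "busybox"}}, // illustrative
		},
	}
	for _, obj := range []interface{}{rc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}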
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when the container's primary UID belongs to some groups in the image [LinuxOnly] should add pod.Spec.SecurityContext.SupplementalGroups to them [LinuxOnly] in resultant supplementary groups for the container processes
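The Security Context block above maps almost field-for-field onto the pod- and container-level securityContext API. A sketch showing the fields those tests toggle (runAsUser/runAsGroup, supplementalGroups, seccomp profile, runAsNonRoot, allowPrivilegeEscalation, readOnlyRootFilesystem); the uid/gid values are arbitrary:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "securitycontext-demo"}, // illustrative
		Spec: corev1.PodSpec{
			// Pod-level settings apply to every container unless overridden per container.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:          ptr(int64(1000)),
				RunAsGroup:         ptr(int64(3000)),
				SupplementalGroups: []int64{4000},
				SeccompProfile:     &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
			},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox", // illustrative
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot:             ptr(true),
					AllowPrivilegeEscalation: ptr(false),
					ReadOnlyRootFilesystem:   ptr(true),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}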
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
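The Sysctls tests above set namespaced kernel parameters through the pod security context. A minimal sketch using a "safe" sysctl, which needs no extra kubelet configuration; since 1.23 the name may also be written with slashes (e.g. kernel/shm_rmid_forced), which the last test above exercises:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"}, // illustrative
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				// Unsafe sysctls would additionally require enabling via the
				// kubelet's allowed-unsafe-sysctls list, per the "should not launch unsafe" test.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}}, // illustrative
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}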
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
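The Variable Expansion tests above all rely on the kubelet expanding $(NAME) references: in later env var values, in command and args, and (for the subpath case) in a volumeMount's subPathExpr. A sketch touching all three spots; names and mount paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "busybox", // illustrative
				Env: []corev1.EnvVar{
					{Name: "GREETING", Value: "hello"},
					// Later vars can compose earlier ones via $(NAME).
					{Name: "MESSAGE", Value: "$(GREETING) world"},
				},
				// $(NAME) is expanded in command and args as well.
				Command: []string{"/bin/echo"},
				Args:    []string{"$(MESSAGE)"},
				// subPathExpr allows the same expansion inside a volume subpath.
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "work",
					MountPath:   "/work",
					SubPathExpr: "$(GREETING)",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "work",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}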
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should list, patch and delete a LimitRange by collection [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSI Mock fsgroup as mount option Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI Mock fsgroup as mount option Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI attach test using mock driver should require VolumeAttach for ephemeral volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI Volume expansion should not have staging_path missing in node expand volume pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume service account token CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume storage capacity storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [It] [sig-storage] CSI Mock workload info CSI workload information using mock driver should not be passed when podInfoOnMount=nil
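The CSI Mock attach and workload-info tests above pivot on two CSIDriver spec fields: attachRequired (whether a VolumeAttachment is needed) and podInfoOnMount (whether the kubelet passes pod name, namespace, UID, and an ephemeral flag to NodePublishVolume). A sketch of such an object; the driver name is hypothetical:

package main

import (
	"encoding/json"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	drv := storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "mock.csi.example.com"}, // hypothetical
		Spec: storagev1.CSIDriverSpec{
			// false skips the external-attacher / VolumeAttachment flow entirely.
			AttachRequired: ptr(false),
			// true makes the kubelet pass pod info to NodePublishVolume as volume
			// context, which the workload-info tests above assert on.
			PodInfoOnMount: ptr(true),
			VolumeLifecycleModes: []storagev1.VolumeLifecycleMode{
				storagev1.VolumeLifecyclePersistent,
				storagev1.VolumeLifecycleEphemeral,
			},
		},
	}
	out, _ := json.MarshalIndent(drv, "", "  ")
	fmt.Println(string(out))
}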
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
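The CSIInlineVolumes tests above exercise the in-pod (CSI ephemeral) volume shape: the volume lives and dies with the pod, and the named driver must advertise the Ephemeral volume lifecycle mode in its CSIDriver object. A sketch, with a hypothetical driver name and driver-specific attributes:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-inline-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox", // illustrative
				VolumeMounts: []corev1.VolumeMount{{Name: "inline", MountPath: "/data"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "inline",
				VolumeSource: corev1.VolumeSource{
					CSI: &corev1.CSIVolumeSource{
						Driver:           "inline.csi.example.com",          // hypothetical
						ReadOnly:         ptr(true),
						VolumeAttributes: map[string]string{"foo": "bar"}, // driver-specific, illustrative
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}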
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
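The Downward API volume tests above project the same pod metadata as files instead of env vars; unlike env vars, the files are rewritten in place when labels or annotations change, which the two "update ... on modification" tests rely on. A sketch with illustrative paths and modes:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "downward-volume-demo", // illustrative
			Labels: map[string]string{"tier": "demo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox", // illustrative
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{
							{Path: "podname", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}},
							{Path: "labels", FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"}},
							// With no limit set, this defaults to node allocatable, per the tests above.
							{Path: "cpu_limit", ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "app", Resource: "limits.cpu"}},
						},
						DefaultMode: ptr(int32(0o644)),
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}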
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
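The EmptyDir block above varies three knobs: the medium (node default vs tmpfs), file modes, and fsGroup ownership, where the kubelet chowns the volume so non-root containers can write. A sketch combining a memory-backed, size-limited emptyDir with an fsGroup; the size and gid are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	sizeLimit := resource.MustParse("64Mi")
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative
		Spec: corev1.PodSpec{
			// fsGroup drives the FSGroup-ownership variants above.
			SecurityContext: &corev1.PodSecurityContext{FSGroup: ptr(int64(2000))},
			Containers: []corev1.Container{{
				Name:         "app",
				Image:        "busybox", // illustrative
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// Medium "Memory" backs the volume with tmpfs; SizeLimit caps it.
						Medium:    corev1.StorageMediumMemory,
						SizeLimit: &sizeLimit,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}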
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [It] [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [ReportBeforeSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest2 Down
kubetest2 Up
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-api-machinery] kube-apiserver identity [Feature:APIServerIdentity] kube-apiserver identity should persist after restart [Disruptive]
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] SelfSubjectReview [Feature:APISelfSubjectReview] should support SelfSubjectReview API operations
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) CustomResourceDefinition Should scale with a CRD targetRef
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light [Slow] Should scale from 2 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment (Pod Resource) Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and then from 3 pods to 1 pod and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Container Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Utilization for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: Memory) [Serial] [Slow] Deployment (Pod Resource) Should scale from 1 pod to 3 pods and then from 3 pods to 5 pods using Average Value for aggregation
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range over two stabilization windows
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with both scale up and down controls configured should keep recommendation within the range with stabilization window and pod limit rate
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Object from Stackdriver should scale down to 0
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale down with Prometheus
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with Custom Metric of type Pod from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target average value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale down with target value
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with External Metric from Stackdriver should scale up with two metrics
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Container Resource and External Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should not scale down when one metric is missing (Pod and Object Metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Pod and External metrics)
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] [Feature:CustomMetricsAutoscaling] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) with multiple metrics of different types should scale up when one metric is missing (Resource and Object metrics)
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod Kubectl run [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes should handle in-cluster config
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on different nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service on the same nodes
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Cluster [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=Local [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Serial]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should allow creating a Pod with an SCTP HostPort [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should allow creating a basic SCTP service with pod and endpoints [LinuxOnly] [Serial]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with delayed allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple containers of multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports external claim referenced by multiple pods
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports init containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports inline claim referenced by multiple containers
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing external resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] cluster with immediate allocation supports simple pod referencing inline resource claim
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] driver supports claim and class parameters
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must not run a pod if a claim is not reserved for it
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must retry NodePrepareResource
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet must unprepare resources for force-deleted pod
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] kubelet registers plugin
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple drivers work
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes reallocation works
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with network-attached resources schedules onto different nodes
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with delayed allocation uses all resources
Kubernetes e2e suite [It] [sig-node] DRA [Feature:DynamicResourceAllocation] multiple nodes with node-local resources with immediate allocation uses all resources
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] pods evicted from tainted nodes have pod disruption condition
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates Pods with non-empty schedulingGates are blocked on scheduling [Feature:PodSchedulingReadiness] [alpha]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should add SELinux mount option to existing mount options
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for CSI driver that does not support SELinux mount
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for Pod without SELinux context
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not pass SELinux mount option for RWO volume
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should not unstage volume when starting a second pod with the same SELinux context
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should pass SELinux mount option for RWOP volume and Pod with SELinux context set
Kubernetes e2e suite [It] [sig-storage] CSI Mock selinux on mount SELinuxMount [LinuxOnly][Feature:SELinux][Feature:SELinuxMountReadWriteOncePod] should unstage volume when starting a second pod with different SELinux context
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume attach CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion Expansion with recovery[Feature:RecoverVolumeExpansionFailure] recovery should not be possible in partially expanded volumes
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion Expansion with recovery[Feature:RecoverVolumeExpansionFailure] should allow recovery if controller expansion fails with final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume expansion Expansion with recovery[Feature:RecoverVolumeExpansionFailure] should record target size in allocated resources
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume limit CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume limit CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume limit CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume node stage CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [It] [sig-storage] CSI Mock volume snapshot CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] read-write-once-pod[Feature:ReadWriteOncePod] should block a second pod from using an in-use ReadWriteOncePod volume on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
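
For context on the pd.csi.storage.gke.io block above: the topology (delayed/immediate binding) and volume-expand patterns exercise StorageClass fields such as volumeBindingMode, allowVolumeExpansion, and allowedTopologies. A minimal illustrative manifest follows; the class name, disk type, and zone are assumptions, not values taken from this run:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-csi-delayed                      # hypothetical name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard                         # assumed disk type
volumeBindingMode: WaitForFirstConsumer     # the "delayed binding" test pattern
allowVolumeExpansion: true                  # required by the volume-expand cases
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone               # CSI topology key for the GCE PD driver
    values:
    - us-central1-a                         # assumed zone
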
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
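
The two ConfigMap cases above verify that a pod referencing a missing ConfigMap (or a missing key) as a non-optional volume fails to come up. A minimal sketch of such a pod; all names here are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: cm-required-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: does-not-exist        # references a ConfigMap that is absent
      optional: false             # non-optional: the container cannot start
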
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
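
The Downward API ephemeral-storage cases check that a container's requests.ephemeral-storage and limits.ephemeral-storage can be surfaced as environment variables through resourceFieldRef. A minimal sketch, with assumed image, names, and sizes:

apiVersion: v1
kind: Pod
metadata:
  name: downward-ephemeral-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox                # assumed image
    command: ["sh", "-c", "env | grep EPHEMERAL"]
    resources:
      requests:
        ephemeral-storage: 1Gi
      limits:
        ephemeral-storage: 2Gi
    env:
    - name: EPHEMERAL_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.ephemeral-storage
    - name: EPHEMERAL_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.ephemeral-storage
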
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] deletion should be idempotent
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with different parameters
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should test that deleting a claim before the volume is provisioned deletes the volume.
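
The DynamicProvisioner Default cases above toggle which StorageClass the cluster treats as the default; the marker is the storageclass.kubernetes.io/is-default-class annotation. A minimal sketch, with an assumed class name and provisioner (the "changing"/"removing the default annotation" tests flip this value to "false" or drop it):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                  # assumed name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs   # assumed in-tree provisioner
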
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when attachable [Feature:Flexvolumes]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [It] [sig-storage] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile
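
Each HostPathType case above mounts a pre-created host path ('adir', 'afile', 'asocket', 'achardev', 'ablkdev') with a specific hostPath type and expects either a successful mount or a type-mismatch failure. A minimal sketch of the matching Directory case; the entry names come from the test titles, everything else is assumed:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox                # assumed image
    command: ["sh", "-c", "ls /mnt/adir"]
    volumeMounts:
    - name: host
      mountPath: /mnt/adir
  volumes:
  - name: host
    hostPath:
      path: /tmp/adir             # assumed host location of 'adir'
      type: Directory             # must already exist as a directory; the API
                                  # constants HostPathFile/Socket/CharDevice/
                                  # BlockDevice map to the other string values
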
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
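The [Testpattern: Dynamic PV ...] entries above all exercise the same basic flow: a PVC is created against a StorageClass, the in-tree azure-file provisioner satisfies it, and the volumeMode/fsType varies per pattern. As a rough illustration only (this is not the e2e suite's own helper code), a minimal client-go sketch of the block-volmode case follows; the StorageClass name "azurefile", the namespace, and the PVC name are assumptions:

```go
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	blockMode := v1.PersistentVolumeBlock
	scName := "azurefile" // assumed StorageClass name
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "block-pvc", Namespace: "default"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			VolumeMode:       &blockMode, // raw block device, no filesystem
			StorageClassName: &scName,    // triggers dynamic provisioning
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}

	created, err := client.CoreV1().PersistentVolumeClaims("default").
		Create(context.TODO(), pvc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PVC:", created.Name)
}
```

A pod then consumes the bound claim via volumeDevices (block mode) rather than volumeMounts, which is the distinction the block-volmode patterns verify.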
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
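The Generic Ephemeral-volume patterns above cover inline PVC templates: the pod embeds a volumeClaimTemplate, and the resulting claim's lifecycle is tied to the pod's. A minimal sketch of that object shape, under the same assumptions as the previous sketch (StorageClass and image names are placeholders); it only constructs and prints the pod, which would be submitted with client-go as shown earlier:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "azurefile" // assumed StorageClass name
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "ephemeral-demo"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "app",
				Image:        "busybox:1.36", // assumed image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "scratch", MountPath: "/data"}},
			}},
			Volumes: []v1.Volume{{
				Name: "scratch",
				VolumeSource: v1.VolumeSource{
					// Generic ephemeral volume: the PVC below is created with
					// the pod and garbage-collected when the pod is deleted.
					Ephemeral: &v1.EphemeralVolumeSource{
						VolumeClaimTemplate: &v1.PersistentVolumeClaimTemplate{
							Spec: v1.PersistentVolumeClaimSpec{
								AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
								StorageClassName: &sc,
								Resources: v1.ResourceRequirements{
									Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
								},
							},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```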
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
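The Inline-volume patterns above bypass PV/PVC objects entirely: the pod references an Azure file share directly through an azureFile volume source. A sketch of that pod shape, again illustrative only; the Secret and share names are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "inline-azurefile"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:         "app",
				Image:        "busybox:1.36", // assumed image
				Command:      []string{"sleep", "3600"},
				VolumeMounts: []v1.VolumeMount{{Name: "share", MountPath: "/mnt/share"}},
			}},
			Volumes: []v1.Volume{{
				Name: "share",
				VolumeSource: v1.VolumeSource{
					// Inline in-tree azure-file volume: no PV/PVC involved.
					AzureFile: &v1.AzureFileVolumeSource{
						SecretName: "azure-storage-secret", // placeholder Secret with account creds
						ShareName:  "myshare",              // placeholder file share
						ReadOnly:   false,
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```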
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
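The Pre-provisioned PV patterns that close out the azure-file block (the ceph driver block follows) take the opposite route from dynamic provisioning: a PersistentVolume pointing at an existing share is created first, and a PVC then binds to it. A sketch of such a PV, with placeholder Secret and share names:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "preprovisioned-azurefile"},
		Spec: v1.PersistentVolumeSpec{
			Capacity:                      v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteMany},
			PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// Points at an existing Azure file share; nothing is provisioned.
				AzureFile: &v1.AzureFilePersistentVolumeSource{
					SecretName: "azure-storage-secret", // placeholder Secret with account creds
					ShareName:  "myshare",              // placeholder file share
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pv, "", "  ")
	fmt.Println(string(out)) // a PVC with a matching spec then binds to this PV
}
```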
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down is usable by a new pod with a different SELinux context when kubelet returns [Feature:SELinux][Feature:SELinuxMountReadWriteOncePod].
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]