Result   | FAILURE
Tests    | 43 failed / 841 succeeded
Started  |
Elapsed  | 34m29s
Revision | master
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\(allowExpansion\)\]\svolume\-expand\sVerify\sif\soffline\sPVC\sexpansion\sworks$'
test/e2e/storage/testsuites/volume_expand.go:222
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4()
	test/e2e/storage/testsuites/volume_expand.go:222 +0xa85
from junit_01.xml
{"msg":"FAILED External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","completed":9,"skipped":37,"failed":1,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works"]} [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:37.878�[0m Jan 21 13:16:37.878: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volume-expand �[38;5;243m01/21/23 13:16:37.879�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:38.285�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:38.569�[0m [It] Verify if offline PVC expansion works test/e2e/storage/testsuites/volume_expand.go:176 Jan 21 13:16:38.802: INFO: Creating resource for dynamic PV Jan 21 13:16:38.802: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(ebs.csi.aws.com) supported size:{ 1Gi} �[1mSTEP:�[0m creating a StorageClass volume-expand-7662-e2e-sc5ck8r �[38;5;243m01/21/23 13:16:38.803�[0m �[1mSTEP:�[0m creating a claim �[38;5;243m01/21/23 13:16:38.932�[0m �[1mSTEP:�[0m Creating a pod with dynamically provisioned volume �[38;5;243m01/21/23 13:16:39.164�[0m Jan 21 13:16:39.283: INFO: Waiting up to 5m0s for pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62" in namespace "volume-expand-7662" to be "running" Jan 21 13:16:39.399: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 115.887073ms Jan 21 13:16:41.513: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229679589s Jan 21 13:16:43.515: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232209843s Jan 21 13:16:45.518: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234597379s Jan 21 13:16:47.513: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22993625s Jan 21 13:16:49.514: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 10.230873185s Jan 21 13:16:51.513: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 12.229433966s Jan 21 13:16:53.529: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 14.2458879s Jan 21 13:16:55.512: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 16.229317984s Jan 21 13:16:57.513: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Pending", Reason="", readiness=false. Elapsed: 18.229663496s Jan 21 13:16:59.518: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.234654293s Jan 21 13:16:59.518: INFO: Pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62" satisfied condition "running" �[1mSTEP:�[0m Deleting the previously created pod �[38;5;243m01/21/23 13:16:59.64�[0m Jan 21 13:16:59.640: INFO: Deleting pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62" in namespace "volume-expand-7662" Jan 21 13:16:59.764: INFO: Wait up to 5m0s for pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62" to be fully deleted �[1mSTEP:�[0m Expanding current pvc �[38;5;243m01/21/23 13:17:03.99�[0m Jan 21 13:17:03.990: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} �[1mSTEP:�[0m Waiting for cloudprovider resize to finish �[38;5;243m01/21/23 13:17:04.22�[0m Jan 21 13:17:06.456: INFO: Unexpected error: While waiting for pvc resize to finish: <*errors.errorString | 0xc000f158c0>: { s: "error while waiting for controller resize to finish: error fetching pv \"pvc-1298849f-ef8e-431c-821b-761733729411\" for resizing Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/persistentvolumes/pvc-1298849f-ef8e-431c-821b-761733729411\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.456: FAIL: While waiting for pvc resize to finish: error while waiting for controller resize to finish: error fetching pv "pvc-1298849f-ef8e-431c-821b-761733729411" for resizing Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/persistentvolumes/pvc-1298849f-ef8e-431c-821b-761733729411": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volume_expand.go:222 +0xa85 Jan 21 13:17:06.456: INFO: Deleting pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62" in namespace "volume-expand-7662" Jan 21 13:17:06.578: INFO: Unexpected error: while cleaning up pod already deleted in resize test: <*errors.errorString | 0xc0010ad3a0>: { s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/pods/pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.578: FAIL: while cleaning up pod already deleted in resize test: pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/pods/pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.1() test/e2e/storage/testsuites/volume_expand.go:196 +0xae panic({0x6ea5bc0, 0xc00443f640}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000298e00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00077cea0, 0x188}, {0xc0004f9998?, 0x735f76c?, 0xc0004f99b8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000160900, 0x173}, {0xc0004f9a30?, 0xc0019b6420?, 0xc0004f9a58?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c2e560, 0xc000f158c0}, {0xc000f158e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volume_expand.go:222 +0xa85 �[1mSTEP:�[0m Deleting pod �[38;5;243m01/21/23 13:17:06.579�[0m Jan 21 13:17:06.579: INFO: Deleting pod "pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62" in namespace "volume-expand-7662" �[1mSTEP:�[0m Deleting pvc �[38;5;243m01/21/23 13:17:06.7�[0m Jan 21 13:17:06.858: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comhk9g2" �[1mSTEP:�[0m Deleting sc �[38;5;243m01/21/23 13:17:06.983�[0m Jan 21 13:17:07.107: INFO: Unexpected error: while cleaning up resource: <errors.aggregate | len:2, cap:2>: [ <*errors.errorString | 0xc00107abd0>{ s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/pods/pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62\": dial tcp 52.28.228.130:443: connect: connection refused", }, <errors.aggregate | len:3, cap:4>[ <*fmt.wrapError | 0xc000d399c0>{ msg: "failed to find PVC ebs.csi.aws.comhk9g2: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/persistentvolumeclaims/ebs.csi.aws.comhk9g2\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*url.Error | 0xc0015b78c0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/persistentvolumeclaims/ebs.csi.aws.comhk9g2", Err: <*net.OpError | 0xc003e17590>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00410a990>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000d39980>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, <*fmt.wrapError | 0xc003f7b0c0>{ msg: "failed to delete PVC ebs.csi.aws.comhk9g2: PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/persistentvolumeclaims/ebs.csi.aws.comhk9g2\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*errors.errorString | 0xc00123e260>{ s: "PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/persistentvolumeclaims/ebs.csi.aws.comhk9g2\": dial tcp 52.28.228.130:443: connect: connection refused", }, }, <*fmt.wrapError | 0xc003f7b240>{ msg: "failed to delete StorageClass volume-expand-7662-e2e-sc5ck8r: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-7662-e2e-sc5ck8r\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*url.Error | 0xc00410b890>{ Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-7662-e2e-sc5ck8r", Err: <*net.OpError | 0xc004054190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001a5a6f0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003f7b200>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, ], ] Jan 21 13:17:07.108: FAIL: while cleaning up resource: [pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/pods/pod-bd8887bc-aeb5-46c1-9ae4-aa1113864f62": dial tcp 52.28.228.130:443: connect: connection refused, failed 
to find PVC ebs.csi.aws.comhk9g2: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/persistentvolumeclaims/ebs.csi.aws.comhk9g2": dial tcp 52.28.228.130:443: connect: connection refused, failed to delete PVC ebs.csi.aws.comhk9g2: PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/persistentvolumeclaims/ebs.csi.aws.comhk9g2": dial tcp 52.28.228.130:443: connect: connection refused, failed to delete StorageClass volume-expand-7662-e2e-sc5ck8r: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-7662-e2e-sc5ck8r": dial tcp 52.28.228.130:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func2() test/e2e/storage/testsuites/volume_expand.go:154 +0x49a panic({0x6ea5bc0, 0xc00419b800}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000183500}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0023f2420, 0x141}, {0xc0004f9328?, 0x735f76c?, 0xc0004f9348?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002476500, 0x12c}, {0xc0004f93c0?, 0xc00010dc00?, 0xc0004f93e8?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c2e560, 0xc0010ad3a0}, {0xc0010ad3b0?, 0x2725c0c?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.1() test/e2e/storage/testsuites/volume_expand.go:196 +0xae panic({0x6ea5bc0, 0xc00443f640}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000298e00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00077cea0, 0x188}, {0xc0004f9998?, 0x735f76c?, 0xc0004f99b8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000160900, 0x173}, {0xc0004f9a30?, 0xc0019b6420?, 0xc0004f9a58?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c2e560, 0xc000f158c0}, {0xc000f158e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volume_expand.go:222 +0xa85 [AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "volume-expand-7662". 
�[38;5;243m01/21/23 13:17:07.108�[0m Jan 21 13:17:07.233: INFO: Unexpected error: failed to list events in namespace "volume-expand-7662": <*url.Error | 0xc001a5abd0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/events", Err: <*net.OpError | 0xc003e17b80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001760780>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000d39b40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:07.233: FAIL: failed to list events in namespace "volume-expand-7662": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc004549590, {0xc0035fd488, 0x12}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0038fa180}, {0xc0035fd488, 0x12}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0015214a0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0015214a0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "volume-expand-7662" for this suite. �[38;5;243m01/21/23 13:17:07.233�[0m Jan 21 13:17:07.358: FAIL: Couldn't delete ns: "volume-expand-7662": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-7662", Err:(*net.OpError)(0xc004054640)}) Full Stack Trace panic({0x6ea5bc0, 0xc0006e0140}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0001ff8f0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00036ac60, 0x104}, {0xc004549048?, 0x735f76c?, 0xc004549068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0004c62d0, 0xef}, {0xc0045490e0?, 0xc00085bc80?, 0xc004549108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc001a5abd0}, {0xc000d39b80?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc004549590, {0xc0035fd488, 0x12}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0038fa180}, {0xc0035fd488, 0x12}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0015214a0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0015214a0) test/e2e/framework/framework.go:435 +0x21d
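Every FAIL in the block above bottoms out in the same transport error: dial tcp 52.28.228.130:443: connect: connection refused against the cluster's API endpoint, first while waiting for the controller resize and then during every cleanup call. A minimal probe like the sketch below (a hypothetical helper, not part of the e2e suite; only the hostname is taken from the failure messages) can re-dial the endpoint to confirm whether this was a control-plane outage rather than a storage-driver bug.

// probe_apiserver.go — hypothetical helper, not part of the e2e suite.
// Repeatedly dials the apiserver endpoint from the errors above to see
// whether "connection refused" persists or the control plane recovers.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the failure messages above.
	const addr = "api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io:443"
	for i := 0; i < 10; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("%s dial failed: %v\n", time.Now().Format(time.RFC3339), err)
		} else {
			fmt.Printf("%s dial succeeded\n", time.Now().Format(time.RFC3339))
			conn.Close()
		}
		time.Sleep(10 * time.Second)
	}
}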
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolumes\sshould\sstore\sdata$'
test/e2e/storage/framework/volume_resource.go:306
k8s.io/kubernetes/test/e2e/storage/framework.createPVCPVFromDynamicProvisionSC(0xc0014d4000, {0xc0009b4740, 0xf}, {0xc003b64a38, 0x3}, 0xc003b5d980, {0x73624b4, 0x5}, {0x0, 0x0, ...})
	test/e2e/storage/framework/volume_resource.go:306 +0x427
k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource({0x7c52198, 0xc000913a20}, 0xc00187d080, {{0x73ca3a3, 0x1a}, {0x0, 0x0}, {0x736c750, 0x9}, {0x0, ...}, ...}, ...)
	test/e2e/storage/framework/volume_resource.go:104 +0xc72
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func1()
	test/e2e/storage/testsuites/volumes.go:142 +0x28e
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	test/e2e/storage/testsuites/volumes.go:162 +0x9e
from junit_01.xml
{"msg":"FAILED External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","completed":8,"skipped":77,"failed":1,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]} [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:03.943�[0m Jan 21 13:17:03.943: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volume �[38;5;243m01/21/23 13:17:03.944�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:17:04.284�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:17:04.51�[0m [It] should store data test/e2e/storage/testsuites/volumes.go:161 Jan 21 13:17:04.733: INFO: Creating resource for dynamic PV Jan 21 13:17:04.733: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} �[1mSTEP:�[0m creating a StorageClass volume-3643-e2e-sc6blk7 �[38;5;243m01/21/23 13:17:04.734�[0m �[1mSTEP:�[0m creating a claim �[38;5;243m01/21/23 13:17:04.858�[0m Jan 21 13:17:05.135: INFO: Unexpected error: <*url.Error | 0xc002f4b9b0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-3643/persistentvolumeclaims/ebs.csi.aws.comsncgd", Err: <*net.OpError | 0xc00324bd60>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0031e2fc0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0039911c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.135: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-3643/persistentvolumeclaims/ebs.csi.aws.comsncgd": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/framework.createPVCPVFromDynamicProvisionSC(0xc0014d4000, {0xc0009b4740, 0xf}, {0xc003b64a38, 0x3}, 0xc003b5d980, {0x73624b4, 0x5}, {0x0, 0x0, ...}) test/e2e/storage/framework/volume_resource.go:306 +0x427 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource({0x7c52198, 0xc000913a20}, 0xc00187d080, {{0x73ca3a3, 0x1a}, {0x0, 0x0}, {0x736c750, 0x9}, {0x0, ...}, ...}, ...) test/e2e/storage/framework/volume_resource.go:104 +0xc72 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func1() test/e2e/storage/testsuites/volumes.go:142 +0x28e k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumes.go:162 +0x9e [AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "volume-3643". 
�[38;5;243m01/21/23 13:17:05.135�[0m Jan 21 13:17:05.263: INFO: Unexpected error: failed to list events in namespace "volume-3643": <*url.Error | 0xc003378660>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-3643/events", Err: <*net.OpError | 0xc0033322d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003378630>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003991600>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.263: FAIL: failed to list events in namespace "volume-3643": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-3643/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00258f590, {0xc002a2a5a0, 0xb}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002ec6900}, {0xc002a2a5a0, 0xb}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0014d4000, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0014d4000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "volume-3643" for this suite. �[38;5;243m01/21/23 13:17:05.263�[0m Jan 21 13:17:05.386: FAIL: Couldn't delete ns: "volume-3643": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-3643": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-3643", Err:(*net.OpError)(0xc0037ada90)}) Full Stack Trace panic({0x6ea5bc0, 0xc002f637c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00332a700}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003380700, 0xf6}, {0xc00258f048?, 0x735f76c?, 0xc00258f068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002ee82d0, 0xe1}, {0xc00258f0e0?, 0xc001056c60?, 0xc00258f108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003378660}, {0xc003991640?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00258f590, {0xc002a2a5a0, 0xb}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002ec6900}, {0xc002a2a5a0, 0xb}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0014d4000, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0014d4000) test/e2e/framework/framework.go:435 +0x21d
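This test died one step earlier than the previous one, at "creating a claim": the framework creates a PVC against the freshly created StorageClass and then reads it back, and the very first Get against the apiserver was refused. As a rough sketch of that bind-wait pattern (a simplified, hypothetical helper, not the actual createPVCPVFromDynamicProvisionSC code referenced in the trace), polling a dynamically provisioned claim with client-go looks roughly like this:

// Package pvcwait: illustrative sketch of waiting for a PVC to bind.
package pvcwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForClaimBound polls the claim until it reaches phase Bound. In the
// failure above, the analogous Get returned "connection refused", so the
// test aborted before the claim could ever bind.
func waitForClaimBound(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("claim %s/%s phase=%s\n", ns, name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}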
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\(allowExpansion\)\]\svolume\-expand\sshould\sresize\svolume\swhen\sPVC\sis\sedited\swhile\spod\sis\susing\sit$'
test/e2e/storage/testsuites/volume_expand.go:298
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func5()
	test/e2e/storage/testsuites/volume_expand.go:298 +0xa9d
from junit_01.xml
{"msg":"FAILED External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","completed":7,"skipped":56,"failed":1,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"]} [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:15:50.417�[0m Jan 21 13:15:50.417: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volume-expand �[38;5;243m01/21/23 13:15:50.418�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:15:50.76�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:15:50.979�[0m [It] should resize volume when PVC is edited while pod is using it test/e2e/storage/testsuites/volume_expand.go:252 Jan 21 13:15:51.200: INFO: Creating resource for dynamic PV Jan 21 13:15:51.200: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(ebs.csi.aws.com) supported size:{ 1Gi} �[1mSTEP:�[0m creating a StorageClass volume-expand-1294-e2e-sckm9xk �[38;5;243m01/21/23 13:15:51.2�[0m �[1mSTEP:�[0m creating a claim �[38;5;243m01/21/23 13:15:51.312�[0m Jan 21 13:15:51.312: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil �[1mSTEP:�[0m Creating a pod with dynamically provisioned volume �[38;5;243m01/21/23 13:15:51.535�[0m Jan 21 13:15:51.650: INFO: Waiting up to 5m0s for pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c" in namespace "volume-expand-1294" to be "running" Jan 21 13:15:51.760: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.258139ms Jan 21 13:15:53.871: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221839743s Jan 21 13:15:55.872: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222402237s Jan 21 13:15:57.879: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22948061s Jan 21 13:15:59.872: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.222635093s Jan 21 13:16:01.871: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221463661s Jan 21 13:16:03.873: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223502771s Jan 21 13:16:05.871: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.221648896s Jan 21 13:16:07.873: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.223671037s Jan 21 13:16:07.873: INFO: Pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c" satisfied condition "running" �[1mSTEP:�[0m Expanding current pvc �[38;5;243m01/21/23 13:16:07.985�[0m Jan 21 13:16:07.985: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI} �[1mSTEP:�[0m Waiting for cloudprovider resize to finish �[38;5;243m01/21/23 13:16:08.207�[0m �[1mSTEP:�[0m Waiting for file system resize to finish �[38;5;243m01/21/23 13:16:12.434�[0m Jan 21 13:17:06.666: INFO: Unexpected error: while waiting for fs resize to finish: <*errors.errorString | 0xc0002ee280>: { s: "error waiting for pvc \"ebs.csi.aws.comq2bpt\" filesystem resize to finish: error fetching pvc \"ebs.csi.aws.comq2bpt\" for checking for resize status : Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/persistentvolumeclaims/ebs.csi.aws.comq2bpt\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.666: FAIL: while waiting for fs resize to finish: error waiting for pvc "ebs.csi.aws.comq2bpt" filesystem resize to finish: error fetching pvc "ebs.csi.aws.comq2bpt" for checking for resize status : Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/persistentvolumeclaims/ebs.csi.aws.comq2bpt": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func5() test/e2e/storage/testsuites/volume_expand.go:298 +0xa9d Jan 21 13:17:06.666: INFO: Deleting pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c" in namespace "volume-expand-1294" Jan 21 13:17:06.794: INFO: Unexpected error: while cleaning up pod already deleted in resize test: <*errors.errorString | 0xc0002a2ca0>: { s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/pods/pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.794: FAIL: while cleaning up pod already deleted in resize test: pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/pods/pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func5.1() test/e2e/storage/testsuites/volume_expand.go:272 +0xae panic({0x6ea5bc0, 0xc002d0f240}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000678620}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0006d0c40, 0x1ac}, {0xc0014cfa88?, 0x735f76c?, 0xc0014cfaa8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0023a7a00, 0x197}, {0xc0014cfb20?, 0xc0012c4a80?, 0xc0014cfb48?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c2e560, 0xc0002ee280}, {0xc0002ee290?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func5() test/e2e/storage/testsuites/volume_expand.go:298 +0xa9d �[1mSTEP:�[0m Deleting pod �[38;5;243m01/21/23 13:17:06.794�[0m Jan 21 13:17:06.794: INFO: Deleting pod "pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c" in namespace "volume-expand-1294" �[1mSTEP:�[0m Deleting sc �[38;5;243m01/21/23 13:17:06.917�[0m Jan 21 13:17:07.042: INFO: Unexpected error: while cleaning up resource: <errors.aggregate | len:2, cap:2>: [ <*errors.errorString | 0xc0002ee8b0>{ s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/pods/pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c\": dial tcp 52.28.228.130:443: connect: connection refused", }, <errors.aggregate | len:1, cap:1>[ <*fmt.wrapError | 0xc0034dd020>{ msg: "failed to delete StorageClass volume-expand-1294-e2e-sckm9xk: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-1294-e2e-sckm9xk\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*url.Error | 0xc00059fc20>{ Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-1294-e2e-sckm9xk", Err: <*net.OpError | 0xc0034191d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0030e8960>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0034dcfe0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, ], ] Jan 21 13:17:07.042: FAIL: while cleaning up resource: [pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/pods/pod-bd48bf9d-c0ff-4004-8308-b0f119d5f02c": dial tcp 52.28.228.130:443: connect: connection refused, failed to delete StorageClass volume-expand-1294-e2e-sckm9xk: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-1294-e2e-sckm9xk": dial tcp 52.28.228.130:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func2() test/e2e/storage/testsuites/volume_expand.go:154 +0x49a panic({0x6ea5bc0, 0xc0030bdc00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000125c00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0000fd1e0, 0x141}, {0xc0014cf418?, 0x735f76c?, 0xc0014cf438?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0003c3400, 0x12c}, {0xc0014cf4b0?, 0xc001d75100?, 0xc0014cf4d8?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c2e560, 0xc0002a2ca0}, {0xc0002a2cb0?, 0x2725c0c?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func5.1() test/e2e/storage/testsuites/volume_expand.go:272 +0xae panic({0x6ea5bc0, 0xc002d0f240}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000678620}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0006d0c40, 0x1ac}, {0xc0014cfa88?, 0x735f76c?, 0xc0014cfaa8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0023a7a00, 0x197}, {0xc0014cfb20?, 0xc0012c4a80?, 0xc0014cfb48?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c2e560, 0xc0002ee280}, {0xc0002ee290?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func5() test/e2e/storage/testsuites/volume_expand.go:298 +0xa9d [AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "volume-expand-1294". �[38;5;243m01/21/23 13:17:07.043�[0m Jan 21 13:17:07.173: INFO: Unexpected error: failed to list events in namespace "volume-expand-1294": <*url.Error | 0xc002dc0300>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/events", Err: <*net.OpError | 0xc002dd2e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002dc02d0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003efffc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:07.173: FAIL: failed to list events in namespace "volume-expand-1294": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002ad5590, {0xc00343e378, 0x12}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc00263ea80}, {0xc00343e378, 0x12}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001473ce0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001473ce0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "volume-expand-1294" for this suite. 
�[38;5;243m01/21/23 13:17:07.174�[0m Jan 21 13:17:07.300: FAIL: Couldn't delete ns: "volume-expand-1294": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-1294", Err:(*net.OpError)(0xc003419680)}) Full Stack Trace panic({0x6ea5bc0, 0xc0034e8c00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00067e460}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0026b06c0, 0x104}, {0xc002ad5048?, 0x735f76c?, 0xc002ad5068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00420c960, 0xef}, {0xc002ad50e0?, 0xc00193b8c0?, 0xc002ad5108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc002dc0300}, {0xc001fbe000?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002ad5590, {0xc00343e378, 0x12}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc00263ea80}, {0xc00343e378, 0x12}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001473ce0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001473ce0) test/e2e/framework/framework.go:435 +0x21d
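The "Expanding current pvc" step logged above is the core of this test: the claim's requested storage is bumped from 1Gi (1073741824 bytes) to 2Gi (2147483648 bytes) while the pod is running, and the suite then waits for the cloud-provider and filesystem resizes, which is where the apiserver became unreachable. A hedged client-go sketch of that expansion step (illustrative only, not the suite's implementation):

// Package pvcexpand: illustrative sketch of the "Expanding current pvc" step.
package pvcexpand

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expandClaim bumps spec.resources.requests[storage] on an existing PVC and
// leaves the actual resize to the external resizer and kubelet. The claim's
// StorageClass must have allowVolumeExpansion: true, as in this test pattern.
func expandClaim(cs kubernetes.Interface, ns, name, newSize string) (*v1.PersistentVolumeClaim, error) {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	// e.g. newSize = "2Gi", matching the newSize logged above.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse(newSize)
	return cs.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
}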
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sfsgroupchangepolicy\s\(OnRootMismatch\)\[LinuxOnly\]\,\spod\screated\swith\san\sinitial\sfsgroup\,\svolume\scontents\sownership\schanged\svia\schgrp\sin\sfirst\spod\,\snew\spod\swith\sdifferent\sfsgroup\sapplied\sto\sthe\svolume\scontents$'
test/e2e/framework/framework.go:244
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0015b3600)
	test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents","completed":9,"skipped":37,"failed":2,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents"]} [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:07.36�[0m Jan 21 13:17:07.361: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename fsgroupchangepolicy �[38;5;243m01/21/23 13:17:07.361�[0m Jan 21 13:17:07.485: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.608: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.606: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.611: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.609: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.611: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.610: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.610: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.608: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.614: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection 
refused Jan 21 13:17:42.956: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.077: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.077: INFO: Unexpected error: <*errors.errorString | 0xc0000c5c70>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.077: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0015b3600) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy test/e2e/framework/framework.go:187 Jan 21 13:17:43.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.200: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
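This test never reached its body: the framework's BeforeEach kept retrying namespace creation roughly every two seconds, every Post to /api/v1/namespaces was refused, and the retry budget expired with "timed out waiting for the condition". A rough sketch of that retry pattern (assumptions: a 2s/30s poll budget and a simple GenerateName namespace; this is illustrative, not framework.go's actual code):

// Package nsretry: illustrative sketch of retrying test-namespace creation.
package nsretry

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries namespace creation on error; on exhaustion it
// returns wait's timeout error, whose text ("timed out waiting for the
// condition") matches the FAIL message in the log above.
func createTestNamespace(cs kubernetes.Interface, baseName string) (*v1.Namespace, error) {
	var got *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := cs.CoreV1().Namespaces().Create(context.TODO(), &v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			// Each refused Post is logged and retried, as seen above.
			return false, nil
		}
		got = ns
		return true, nil
	})
	return got, err
}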
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sCustomResourceDefinition\sWatch\s\[Privileged\:ClusterAdmin\]\sCustomResourceDefinition\sWatch\swatch\son\scustom\sresource\sdefinition\sobjects\s\[Conformance\]$'
test/e2e/apimachinery/crd_watch.go:114
k8s.io/kubernetes/test/e2e/apimachinery.glob..func7.1.1()
	test/e2e/apimachinery/crd_watch.go:114 +0x1210
from junit_01.xml
{"msg":"FAILED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","completed":6,"skipped":35,"failed":1,"failures":["[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:34.986�[0m Jan 21 13:16:34.986: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename crd-watch �[38;5;243m01/21/23 13:16:34.987�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:35.381�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:35.611�[0m [It] watch on custom resource definition objects [Conformance] test/e2e/apimachinery/crd_watch.go:51 Jan 21 13:16:35.842: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Creating first CR �[38;5;243m01/21/23 13:16:38.898�[0m Jan 21 13:16:39.019: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-21T13:16:38Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-21T13:16:38Z]] name:name1 resourceVersion:10314 uid:c9aa62fd-b499-4faf-8265-49addf9b2a42] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP:�[0m Creating second CR �[38;5;243m01/21/23 13:16:49.019�[0m Jan 21 13:16:49.138: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-21T13:16:49Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-21T13:16:49Z]] name:name2 resourceVersion:10712 uid:cdc69f79-828e-4e86-9103-89e781fbcdf2] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP:�[0m Modifying first CR �[38;5;243m01/21/23 13:16:59.139�[0m Jan 21 13:16:59.256: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-21T13:16:38Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-21T13:16:59Z]] name:name1 resourceVersion:11094 uid:c9aa62fd-b499-4faf-8265-49addf9b2a42] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP:�[0m Modifying second CR �[38;5;243m01/21/23 13:17:09.257�[0m Jan 21 13:17:09.385: INFO: Unexpected error: failed to patch custom resource: name2: <*url.Error | 0xc003cfa030>: { Op: "Patch", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/mygroup.example.com/v1beta1/noxus/name2", Err: <*net.OpError | 0xc002ca0140>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002f6fec0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, 
Err: <*os.SyscallError | 0xc002b9f180>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:09.386: FAIL: failed to patch custom resource: name2: Patch "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/mygroup.example.com/v1beta1/noxus/name2": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.glob..func7.1.1() test/e2e/apimachinery/crd_watch.go:114 +0x1210 Jan 21 13:17:09.507: FAIL: failed to delete CustomResourceDefinition: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/apiextensions.k8s.io/v1/customresourcedefinitions/noxus.mygroup.example.com": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace panic({0x6ea5bc0, 0xc003533f80}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000537c70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0024c3a00, 0xf6}, {0xc00322dc90?, 0x735f76c?, 0xc00322dcb0?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0006e94a0, 0xe1}, {0xc00322dd28?, 0xc002c98cc0?, 0xc00322dd50?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003cfa030}, {0xc002b9f1c0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/apimachinery.glob..func7.1.1() test/e2e/apimachinery/crd_watch.go:114 +0x1210 [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "crd-watch-8639". �[38;5;243m01/21/23 13:17:09.508�[0m Jan 21 13:17:09.632: INFO: Unexpected error: failed to list events in namespace "crd-watch-8639": <*url.Error | 0xc003cfb110>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/crd-watch-8639/events", Err: <*net.OpError | 0xc002ca07d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003cfb0e0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002b9f720>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:09.633: FAIL: failed to list events in namespace "crd-watch-8639": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/crd-watch-8639/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00424d590, {0xc001454c10, 0xe}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002992300}, {0xc001454c10, 0xe}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d671e0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d671e0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "crd-watch-8639" for this suite. 
�[38;5;243m01/21/23 13:17:09.633�[0m Jan 21 13:17:09.757: FAIL: Couldn't delete ns: "crd-watch-8639": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/crd-watch-8639": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/crd-watch-8639", Err:(*net.OpError)(0xc003a7af00)}) Full Stack Trace panic({0x6ea5bc0, 0xc000bf2b80}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000609a40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002cdcc00, 0xfc}, {0xc00424d048?, 0x735f76c?, 0xc00424d068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0006e9590, 0xe7}, {0xc00424d0e0?, 0xc002c99e00?, 0xc00424d108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003cfb110}, {0xc002b9f760?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00424d590, {0xc001454c10, 0xe}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002992300}, {0xc001454c10, 0xe}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d671e0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d671e0) test/e2e/framework/framework.go:435 +0x21d
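Here the watch test got as far as "Modifying second CR": it patches the custom resource name2 through the apiserver, and that Patch is the call that was refused. A hedged dynamic-client sketch of such a patch (the group/version/resource and the "dummy" field come from the URL and the MODIFIED event in the log; the helper itself is illustrative, not the test's code):

// Package crpatch: illustrative sketch of patching a cluster-scoped custom
// resource like the noxus object in the failure above.
package crpatch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// patchNoxu merge-patches a "dummy" field onto the named custom resource.
// The GVR matches the failing URL: /apis/mygroup.example.com/v1beta1/noxus/name2.
func patchNoxu(dc dynamic.Interface, name string) error {
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	_, err := dc.Resource(gvr).Patch(context.TODO(), name,
		types.MergePatchType, []byte(`{"dummy":"test"}`), metav1.PatchOptions{})
	return err
}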
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sCustomResourceDefinition\sresources\s\[Privileged\:ClusterAdmin\]\sSimple\sCustomResourceDefinition\slisting\scustom\sresource\sdefinition\sobjects\sworks\s\s\[Conformance\]$'
test/e2e/framework/framework.go:244
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e73760)
	test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","completed":6,"skipped":32,"failed":2,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.5�[0m Jan 21 13:17:06.501: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename custom-resource-definition �[38;5;243m01/21/23 13:17:06.502�[0m Jan 21 13:17:06.626: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.754: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.754: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.751: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.751: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.752: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.754: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.751: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.754: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.751: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.189: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.318: INFO: Unexpected error while creating namespace: Post 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.319: INFO: Unexpected error: <*errors.errorString | 0xc0000c5b60>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.319: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000e73760) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 Jan 21 13:17:42.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.448: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sCustomResourcePublishOpenAPI\s\[Privileged\:ClusterAdmin\]\supdates\sthe\spublished\sspec\swhen\sone\sversion\sgets\srenamed\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000de5340) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","completed":5,"skipped":72,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:11.2�[0m Jan 21 13:17:11.200: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename crd-publish-openapi �[38;5;243m01/21/23 13:17:11.202�[0m Jan 21 13:17:11.329: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.458: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.452: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.455: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.455: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.473: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.457: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.455: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.700: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.824: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.824: INFO: Unexpected error: <*errors.errorString | 0xc0002378f0>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.824: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000de5340) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 Jan 21 13:17:42.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.951: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sCustomResourcePublishOpenAPI\s\[Privileged\:ClusterAdmin\]\sworks\sfor\sCRD\spreserving\sunknown\sfields\sat\sthe\sschema\sroot\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000dd6420) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","completed":13,"skipped":95,"failed":2,"failures":["[sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.03�[0m Jan 21 13:17:06.030: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename crd-publish-openapi �[38;5;243m01/21/23 13:17:06.031�[0m Jan 21 13:17:06.156: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.280: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.280: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.280: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.279: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.285: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.280: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.282: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.280: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.279: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:26.282: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.724: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 
52.28.228.130:443: connect: connection refused Jan 21 13:17:43.852: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.852: INFO: Unexpected error: <*errors.errorString | 0xc00011dbf0>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.852: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000dd6420) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] test/e2e/framework/framework.go:187 Jan 21 13:17:43.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.975: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\ssupport\sCronJob\sAPI\soperations\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cc5ce0) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-apps] CronJob should support CronJob API operations [Conformance]","completed":7,"skipped":68,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:05.388�[0m Jan 21 13:17:05.388: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename cronjob �[38;5;243m01/21/23 13:17:05.39�[0m Jan 21 13:17:05.512: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:07.638: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.633: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.638: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.635: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.637: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.636: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.640: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.638: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.636: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.771: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.956: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.095: INFO: Unexpected error while creating namespace: Post 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.095: INFO: Unexpected error: <*errors.errorString | 0xc000195c50>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.095: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000cc5ce0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 Jan 21 13:17:43.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.219: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\simplement\slegacy\sreplacement\swhen\sthe\supdate\sstrategy\sis\sOnDelete$'
test/e2e/framework/statefulset/rest.go:68 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc003e31200}, 0xc000a7f400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2() test/e2e/framework/statefulset/rest.go:154 +0x35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc0001ac000?}, 0x7ca63f8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc0001ac000}, 0xc0031089f0, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc0001ac000}, 0xb0?, 0x2ef33a5?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc0001ac000}, 0x735f76c?, 0xc004063100?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7ca63f8?, 0xc003e31200?, 0xc002cd80d0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x7ca63f8?, 0xc003e31200}, 0x0?, 0x0) test/e2e/framework/statefulset/rest.go:153 +0x22d k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10}) test/e2e/framework/statefulset/rest.go:83 +0x1f7 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2from junit_01.xml
E0121 13:17:10.938486 6850 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 21 13:17:10.938: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\": dial tcp 52.28.228.130:443: connect: connection refused", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc003e31200}, 0xc000a7f400)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2()\n\ttest/e2e/framework/statefulset/rest.go:154 +0x35\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc0001ac000?}, 0x7ca63f8?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc0001ac000}, 0xc0031089f0, 0x2ef480a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc0001ac000}, 0xb0?, 0x2ef33a5?, 0x28?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc0001ac000}, 0x735f76c?, 0xc004063100?, 0x256a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7ca63f8?, 0xc003e31200?, 0xc002cd80d0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x7ca63f8?, 0xc003e31200}, 0x0?, 0x0)\n\ttest/e2e/framework/statefulset/rest.go:153 +0x22d\nk8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10})\n\ttest/e2e/framework/statefulset/rest.go:83 +0x1f7\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()\n\ttest/e2e/apps/statefulset.go:127 +0x1b2"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 1129 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6ea5bc0?, 0xc0018436c0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0018436c0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6ea5bc0, 0xc0018436c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0008b0cb0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc00078e0f0, 0xec}, {0xc004062c70?, 0xc004062c80?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00078e0f0, 0xec}, {0xc004062d50?, 0x735f76c?, 0xc004062d70?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00008ab60, 0xd7}, {0xc004062de8?, 0xc00008ab60?, 0xc004062e10?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc001b986c0}, {0x0?, 0xc0029dd210?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc003e31200}, 0xc000a7f400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2() test/e2e/framework/statefulset/rest.go:154 +0x35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc0001ac000?}, 0x7ca63f8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc0001ac000}, 0xc0031089f0, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc0001ac000}, 0xb0?, 0x2ef33a5?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc0001ac000}, 0x735f76c?, 0xc004063100?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7ca63f8?, 0xc003e31200?, 0xc002cd80d0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x7ca63f8?, 0xc003e31200}, 0x0?, 0x0) test/e2e/framework/statefulset/rest.go:153 +0x22d k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10}) test/e2e/framework/statefulset/rest.go:83 +0x1f7 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 +0x8d created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:593 +0x60c {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","completed":5,"skipped":68,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete"]} [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:15:36.625�[0m Jan 21 13:15:36.625: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename statefulset �[38;5;243m01/21/23 13:15:36.626�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:15:36.959�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:15:37.181�[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 �[1mSTEP:�[0m Creating service test in namespace statefulset-6971 �[38;5;243m01/21/23 13:15:37.401�[0m [It] should implement legacy replacement when the update strategy is OnDelete test/e2e/apps/statefulset.go:507 �[1mSTEP:�[0m Creating a new StatefulSet �[38;5;243m01/21/23 13:15:37.513�[0m Jan 21 13:15:37.739: INFO: Found 1 stateful pods, waiting for 3 Jan 21 13:15:47.854: INFO: Found 2 stateful pods, waiting for 3 Jan 21 13:15:57.852: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:15:57.852: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:15:57.852: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP:�[0m Restoring Pods to the current revision �[38;5;243m01/21/23 13:15:58.204�[0m Jan 21 13:15:58.685: INFO: Found 1 stateful pods, waiting for 3 Jan 21 13:16:08.796: INFO: Found 2 stateful pods, waiting for 3 Jan 21 13:16:18.798: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:16:18.798: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:16:18.798: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP:�[0m Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-2 to registry.k8s.io/e2e-test-images/httpd:2.4.39-2 �[38;5;243m01/21/23 13:16:19.022�[0m Jan 21 13:16:19.268: INFO: Updating stateful set ss2 �[1mSTEP:�[0m Creating a new revision �[38;5;243m01/21/23 13:16:19.268�[0m �[1mSTEP:�[0m Recreating Pods at the new revision �[38;5;243m01/21/23 13:16:19.494�[0m 
Jan 21 13:16:20.010: INFO: Found 1 stateful pods, waiting for 3 Jan 21 13:16:30.122: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:16:30.122: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:16:30.122: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 21 13:16:40.127: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:16:40.127: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:16:40.127: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 21 13:16:40.355: INFO: Deleting all statefulset in ns statefulset-6971 Jan 21 13:16:40.467: INFO: Scaling statefulset ss2 to 0 Jan 21 13:17:10.938: INFO: Unexpected error: <*url.Error | 0xc001b986c0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc002bc7130>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0029436b0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00276d260>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:10.938: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc003e31200}, 0xc000a7f400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2() test/e2e/framework/statefulset/rest.go:154 +0x35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc0001ac000?}, 0x7ca63f8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc0001ac000}, 0xc0031089f0, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc0001ac000}, 0xb0?, 0x2ef33a5?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc0001ac000}, 0x735f76c?, 0xc004063100?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7ca63f8?, 0xc003e31200?, 0xc002cd80d0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x7ca63f8?, 0xc003e31200}, 0x0?, 0x0) test/e2e/framework/statefulset/rest.go:153 +0x22d k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10}) test/e2e/framework/statefulset/rest.go:83 +0x1f7 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2 E0121 13:17:10.938486 6850 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 21 13:17:10.938: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\": dial tcp 52.28.228.130:443: connect: connection refused", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc003e31200}, 0xc000a7f400)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2()\n\ttest/e2e/framework/statefulset/rest.go:154 +0x35\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc0001ac000?}, 0x7ca63f8?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc0001ac000}, 0xc0031089f0, 0x2ef480a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc0001ac000}, 0xb0?, 0x2ef33a5?, 0x28?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc0001ac000}, 0x735f76c?, 0xc004063100?, 0x256a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7ca63f8?, 0xc003e31200?, 0xc002cd80d0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x7ca63f8?, 0xc003e31200}, 0x0?, 0x0)\n\ttest/e2e/framework/statefulset/rest.go:153 +0x22d\nk8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10})\n\ttest/e2e/framework/statefulset/rest.go:83 +0x1f7\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()\n\ttest/e2e/apps/statefulset.go:127 +0x1b2"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 1129 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6ea5bc0?, 0xc0018436c0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0018436c0?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6ea5bc0, 0xc0018436c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0008b0cb0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc00078e0f0, 0xec}, {0xc004062c70?, 0xc004062c80?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00078e0f0, 0xec}, {0xc004062d50?, 0x735f76c?, 0xc004062d70?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00008ab60, 0xd7}, {0xc004062de8?, 0xc00008ab60?, 0xc004062e10?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc001b986c0}, {0x0?, 0xc0029dd210?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc003e31200}, 0xc000a7f400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2() test/e2e/framework/statefulset/rest.go:154 +0x35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc0001ac000?}, 0x7ca63f8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc0001ac000}, 0xc0031089f0, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc0001ac000}, 0xb0?, 0x2ef33a5?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc0001ac000}, 0x735f76c?, 0xc004063100?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x7ca63f8?, 0xc003e31200?, 0xc002cd80d0?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x7ca63f8?, 0xc003e31200}, 0x0?, 0x0) test/e2e/framework/statefulset/rest.go:153 +0x22d k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10}) test/e2e/framework/statefulset/rest.go:83 +0x1f7 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 +0x8d created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:593 +0x60c [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "statefulset-6971". �[38;5;243m01/21/23 13:17:10.938�[0m Jan 21 13:17:11.068: INFO: Unexpected error: failed to list events in namespace "statefulset-6971": <*url.Error | 0xc002943c50>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971/events", Err: <*net.OpError | 0xc0027fc780>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001b991a0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000d27e60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:11.068: FAIL: failed to list events in namespace "statefulset-6971": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0025f3590, {0xc002fd0190, 0x10}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000de49a0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000de49a0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "statefulset-6971" for this suite. 
�[38;5;243m01/21/23 13:17:11.068�[0m Jan 21 13:17:11.193: FAIL: Couldn't delete ns: "statefulset-6971": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-6971", Err:(*net.OpError)(0xc0027fcc30)}) Full Stack Trace panic({0x6ea5bc0, 0xc000d75280}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000c79e30}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0026e3c00, 0x100}, {0xc0025f3048?, 0x735f76c?, 0xc0025f3068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0003c0780, 0xeb}, {0xc0025f30e0?, 0xc00162dbc0?, 0xc0025f3108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc002943c50}, {0xc000d27ee0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0025f3590, {0xc002fd0190, 0x10}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc003e31200}, {0xc002fd0190, 0x10}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000de49a0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000de49a0) test/e2e/framework/framework.go:435 +0x21d
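The "Observed a panic" block above ends with Ginkgo's own advice: assertions made from a goroutine need a deferred GinkgoRecover so the failure panic is captured instead of crashing the suite process. A minimal illustrative sketch of that pattern with Ginkgo v2 and Gomega (package and helper names here are hypothetical, not the failing test's code):

// Illustrative sketch of the recommended pattern; belongs in a _test.go file.
package sketch_test

import (
	"sync"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
)

var _ = ginkgo.It("asserts from a goroutine without crashing the suite", func() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		// Without this deferred call, a failing assertion in this goroutine
		// panics outside Ginkgo's control, producing output like the
		// "Observed a panic" traces above.
		defer ginkgo.GinkgoRecover()
		defer wg.Done()
		gomega.Expect(doWork()).To(gomega.Succeed())
	}()
	wg.Wait()
})

// doWork is a hypothetical stand-in for the API calls the real test makes
// off the main test goroutine.
func doWork() error { return nil }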
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\s\[Conformance\]$'
test/e2e/apps/wait.go:88 k8s.io/kubernetes/test/e2e/apps.waitForStatus({0x7ca63f8, 0xc0027fb680}, 0xc001618500) test/e2e/apps/wait.go:88 +0xb2 k8s.io/kubernetes/test/e2e/apps.rollbackTest({0x7ca63f8, 0xc0027fb680}, {0xc00365c590, 0xf}, 0xc001615900) test/e2e/apps/statefulset.go:1568 +0x172 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.7() test/e2e/apps/statefulset.go:307 +0xe6
from junit_01.xml
{"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","completed":6,"skipped":33,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]} [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:43.664�[0m Jan 21 13:16:43.664: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename statefulset �[38;5;243m01/21/23 13:16:43.666�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:44.005�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:44.233�[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 �[1mSTEP:�[0m Creating service test in namespace statefulset-701 �[38;5;243m01/21/23 13:16:44.458�[0m [It] should perform rolling updates and roll backs of template modifications [Conformance] test/e2e/apps/statefulset.go:304 �[1mSTEP:�[0m Creating a new StatefulSet �[38;5;243m01/21/23 13:16:44.572�[0m Jan 21 13:16:44.807: INFO: Found 1 stateful pods, waiting for 3 Jan 21 13:16:54.921: INFO: Found 2 stateful pods, waiting for 3 Jan 21 13:17:04.929: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:17:04.929: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:17:04.929: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 21 13:17:05.155: FAIL: Failed waiting for state update: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-701/statefulsets/ss2": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/apps.waitForStatus({0x7ca63f8, 0xc0027fb680}, 0xc001618500) test/e2e/apps/wait.go:88 +0xb2 k8s.io/kubernetes/test/e2e/apps.rollbackTest({0x7ca63f8, 0xc0027fb680}, {0xc00365c590, 0xf}, 0xc001615900) test/e2e/apps/statefulset.go:1568 +0x172 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.7() test/e2e/apps/statefulset.go:307 +0xe6 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 21 13:17:05.282: INFO: Deleting all statefulset in ns statefulset-701 Jan 21 13:17:05.406: INFO: Unexpected error: <*url.Error | 0xc00375a090>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-701/statefulsets", Err: <*net.OpError | 0xc0037522d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00375a060>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003744540>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.407: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-701/statefulsets": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc0027fb680}, {0xc00365c590, 0xf}) test/e2e/framework/statefulset/rest.go:75 +0x133 
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "statefulset-701". �[38;5;243m01/21/23 13:17:05.407�[0m Jan 21 13:17:05.532: INFO: Unexpected error: failed to list events in namespace "statefulset-701": <*url.Error | 0xc00375ab40>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-701/events", Err: <*net.OpError | 0xc0037526e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003858cf0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003744980>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.532: FAIL: failed to list events in namespace "statefulset-701": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-701/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003bcf590, {0xc00365c590, 0xf}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0027fb680}, {0xc00365c590, 0xf}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d118c0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d118c0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "statefulset-701" for this suite. �[38;5;243m01/21/23 13:17:05.533�[0m Jan 21 13:17:05.656: FAIL: Couldn't delete ns: "statefulset-701": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-701": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-701", Err:(*net.OpError)(0xc003752a00)}) Full Stack Trace panic({0x6ea5bc0, 0xc003743380}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000688230}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00374db00, 0xfe}, {0xc003bcf048?, 0x735f76c?, 0xc003bcf068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000fe0d20, 0xe9}, {0xc003bcf0e0?, 0xc0036fdc80?, 0xc003bcf108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc00375ab40}, {0xc0037449c0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003bcf590, {0xc00365c590, 0xf}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0027fb680}, {0xc00365c590, 0xf}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d118c0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d118c0) test/e2e/framework/framework.go:435 +0x21d
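For context on the teardown path in these stack traces (DeleteAllStatefulSets -> Scale -> GetPodList): conceptually it scales each StatefulSet to zero and then polls the pod list until it drains, so any "connection refused" during a poll iteration fails the test from inside AfterEach. A rough sketch under those assumptions, using plain client-go rather than the framework's helpers (names below are illustrative):

// Illustrative sketch; not the e2e framework's implementation.
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func scaleStatefulSetToZero(ctx context.Context, c kubernetes.Interface, ns, name, selector string) error {
	ss, err := c.AppsV1().StatefulSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	zero := int32(0)
	ss.Spec.Replicas = &zero
	if _, err := c.AppsV1().StatefulSets(ns).Update(ctx, ss, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// Poll until no pods match the StatefulSet's selector. Each iteration
	// issues a List against the API server; a transport error such as
	// "connection refused" aborts the wait with that error, which is what
	// the failures above show happening during cleanup.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}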
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sprovide\sbasic\sidentity$'
test/e2e/framework/statefulset/rest.go:68 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc000132000?}, 0x256a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc000132000}, 0xc0029ad188, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc000132000}, 0xc8?, 0x2ef33a5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc000132000}, 0xb?, 0xc00022fe18?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x73613a6?, 0x4?, 0x735f76c?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca63f8?, 0xc00042b500}, 0x2, 0x1, 0xc000f85400) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/wait.go:179 +0xab k8s.io/kubernetes/test/e2e/apps.glob..func10.2.3() test/e2e/apps/statefulset.go:142 +0x1fffrom junit_01.xml
E0121 13:17:07.942492 6836 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 21 13:17:07.942: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\": dial tcp 52.28.228.130:443: connect: connection refused", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc00042b500}, 0xc000f85400)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc000132000?}, 0x256a61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc000132000}, 0xc0029ad188, 0x2ef480a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc000132000}, 0xc8?, 0x2ef33a5?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc000132000}, 0xb?, 0xc00022fe18?, 0x256a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x73613a6?, 0x4?, 0x735f76c?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca63f8?, 0xc00042b500}, 0x2, 0x1, 0xc000f85400)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7ca63f8, 0xc00042b500}, 0xc000f85400)\n\ttest/e2e/framework/statefulset/wait.go:179 +0xab\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.3()\n\ttest/e2e/apps/statefulset.go:142 +0x1ff"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 1396 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6ea5bc0?, 0xc003b00740}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc003b00740?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6ea5bc0, 0xc003b00740}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00062e540}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0007665a0, 0xeb}, {0xc000ce54c8?, 0xc000ce54d8?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0007665a0, 0xeb}, {0xc000ce55a8?, 0x735f76c?, 0xc000ce55c8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000f3e0e0, 0xd6}, {0xc000ce5640?, 0xc000f3e0e0?, 0xc000ce5668?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003966a20}, {0x0?, 0xc0038aa7f0?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc000132000?}, 0x256a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc000132000}, 0xc0029ad188, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc000132000}, 0xc8?, 0x2ef33a5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc000132000}, 0xb?, 0xc00022fe18?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x73613a6?, 0x4?, 0x735f76c?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca63f8?, 0xc00042b500}, 0x2, 0x1, 0xc000f85400) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/wait.go:179 +0xab k8s.io/kubernetes/test/e2e/apps.glob..func10.2.3() test/e2e/apps/statefulset.go:142 +0x1ff k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 +0x8d created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:593 +0x60c {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","completed":6,"skipped":72,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity"]} [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:25.036�[0m Jan 21 13:16:25.036: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename statefulset �[38;5;243m01/21/23 13:16:25.037�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:25.387�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:25.611�[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 �[1mSTEP:�[0m Creating service test in namespace statefulset-173 �[38;5;243m01/21/23 13:16:25.835�[0m [It] should provide basic identity test/e2e/apps/statefulset.go:132 �[1mSTEP:�[0m Creating statefulset ss in namespace statefulset-173 �[38;5;243m01/21/23 13:16:25.956�[0m Jan 21 13:16:26.070: INFO: Default storage class: "kops-csi-1-21" �[1mSTEP:�[0m Saturating stateful set ss �[38;5;243m01/21/23 13:16:26.184�[0m Jan 21 13:16:26.184: INFO: Waiting for stateful pod at index 0 to enter Running Jan 21 13:16:26.296: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 21 13:16:36.419: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 21 13:16:46.422: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false Jan 21 13:16:56.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 21 13:16:56.408: INFO: Resuming stateful pod at index 0 Jan 21 13:16:56.519: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-173 exec ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' Jan 21 13:16:57.705: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" Jan 21 13:16:57.705: INFO: stdout: "" Jan 21 13:16:57.705: INFO: Resumed pod ss-0 Jan 21 13:16:57.705: INFO: Waiting for stateful pod at index 1 to enter Running Jan 21 13:16:57.818: INFO: Found 1 stateful pods, waiting for 2 Jan 21 13:17:07.941: INFO: Unexpected error: 
<*url.Error | 0xc003966a20>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc0036903c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039669f0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002c1e0a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:07.942: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc000132000?}, 0x256a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc000132000}, 0xc0029ad188, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc000132000}, 0xc8?, 0x2ef33a5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc000132000}, 0xb?, 0xc00022fe18?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x73613a6?, 0x4?, 0x735f76c?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca63f8?, 0xc00042b500}, 0x2, 0x1, 0xc000f85400) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/wait.go:179 +0xab k8s.io/kubernetes/test/e2e/apps.glob..func10.2.3() test/e2e/apps/statefulset.go:142 +0x1ff E0121 13:17:07.942492 6836 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 21 13:17:07.942: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar\": dial tcp 52.28.228.130:443: connect: connection refused", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc00042b500}, 0xc000f85400)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc000132000?}, 0x256a61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc000132000}, 0xc0029ad188, 0x2ef480a?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc000132000}, 0xc8?, 0x2ef33a5?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc000132000}, 0xb?, 0xc00022fe18?, 0x256a967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x73613a6?, 0x4?, 0x735f76c?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca63f8?, 0xc00042b500}, 0x2, 0x1, 0xc000f85400)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7ca63f8, 0xc00042b500}, 0xc000f85400)\n\ttest/e2e/framework/statefulset/wait.go:179 +0xab\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.3()\n\ttest/e2e/apps/statefulset.go:142 +0x1ff"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 1396 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6ea5bc0?, 0xc003b00740}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc003b00740?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6ea5bc0, 0xc003b00740}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00062e540}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc0007665a0, 0xeb}, {0xc000ce54c8?, 0xc000ce54d8?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0007665a0, 0xeb}, {0xc000ce55a8?, 0x735f76c?, 0xc000ce55c8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000f3e0e0, 0xd6}, {0xc000ce5640?, 0xc000f3e0e0?, 0xc000ce5668?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003966a20}, {0x0?, 0xc0038aa7f0?, 0x10?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2683cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c6a688?, 0xc000132000?}, 0x256a61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c6a688, 0xc000132000}, 0xc0029ad188, 0x2ef480a?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c6a688, 0xc000132000}, 0xc8?, 0x2ef33a5?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c6a688, 0xc000132000}, 0xb?, 0xc00022fe18?, 0x256a967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x73613a6?, 0x4?, 0x735f76c?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca63f8?, 0xc00042b500}, 0x2, 0x1, 0xc000f85400) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.Saturate({0x7ca63f8, 0xc00042b500}, 0xc000f85400) test/e2e/framework/statefulset/wait.go:179 +0xab k8s.io/kubernetes/test/e2e/apps.glob..func10.2.3() test/e2e/apps/statefulset.go:142 +0x1ff k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 +0x8d created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:593 +0x60c [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 21 13:17:08.066: INFO: Deleting all statefulset in ns statefulset-173 Jan 21 13:17:08.190: INFO: Unexpected error: <*url.Error | 0xc003967980>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-173/statefulsets", Err: <*net.OpError | 0xc003690a00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c19b30>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002c1e440>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:08.190: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-173/statefulsets": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca63f8, 0xc00042b500}, {0xc003a3b570, 0xf}) test/e2e/framework/statefulset/rest.go:75 +0x133 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "statefulset-173". �[38;5;243m01/21/23 13:17:08.19�[0m Jan 21 13:17:08.313: INFO: Unexpected error: failed to list events in namespace "statefulset-173": <*url.Error | 0xc0015a8060>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173/events", Err: <*net.OpError | 0xc002961db0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d6fdd0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002f167a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:08.313: FAIL: failed to list events in namespace "statefulset-173": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc000ce9590, {0xc003a3b570, 0xf}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc00042b500}, {0xc003a3b570, 0xf}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d84580, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d84580) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "statefulset-173" for this suite. 
�[38;5;243m01/21/23 13:17:08.313�[0m Jan 21 13:17:08.434: FAIL: Couldn't delete ns: "statefulset-173": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-173", Err:(*net.OpError)(0xc0013496d0)}) Full Stack Trace panic({0x6ea5bc0, 0xc003ab2f80}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000afebd0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00010ca00, 0xfe}, {0xc000ce9048?, 0x735f76c?, 0xc000ce9068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00165a000, 0xe9}, {0xc000ce90e0?, 0xc001612240?, 0xc000ce9108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc0015a8060}, {0xc002f167e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc000ce9590, {0xc003a3b570, 0xf}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc00042b500}, {0xc003a3b570, 0xf}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d84580, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d84580) test/e2e/framework/framework.go:435 +0x21d
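The Ginkgo message in the StatefulSet failure above recommends deferring GinkgoRecover() at the top of any goroutine that makes assertions, so a failing assertion panics inside Ginkgo's control instead of crashing the suite. A minimal sketch of that pattern (illustrative only, not the StatefulSet test's code; package and spec names are made up):

package example_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestExample(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "GinkgoRecover example")
}

var _ = It("asserts from a goroutine without crashing the suite", func() {
	done := make(chan struct{})
	go func() {
		// Without this deferred call, a failing Expect in this goroutine would
		// panic outside Ginkgo's control, producing a crash like the one above.
		defer GinkgoRecover()
		defer close(done)
		Expect(1 + 1).To(Equal(2))
	}()
	<-done
})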
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sServiceAccountIssuerDiscovery\sshould\ssupport\sOIDC\sdiscovery\sof\sservice\saccount\sissuer\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000db14a0) test/e2e/framework/framework.go:244 +0x7bf from junit_01.xml
{"msg":"FAILED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","completed":6,"skipped":58,"failed":2,"failures":["[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]} [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:09.777�[0m Jan 21 13:17:09.777: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename svcaccounts �[38;5;243m01/21/23 13:17:09.778�[0m Jan 21 13:17:09.900: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.024: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.025: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.026: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.029: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.023: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.022: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.024: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:26.026: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.474: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.598: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.598: INFO: Unexpected error: <*errors.errorString | 0xc00016b920>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.598: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000db14a0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187 Jan 21 13:17:43.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.725: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
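The repeated "Unexpected error while creating namespace" lines above, ending in "timed out waiting for the condition", show the e2e framework polling namespace creation until its timeout expires; that final message is the generic timeout error from the apimachinery wait package. A minimal sketch of that polling pattern (not the framework's actual code; the 2s interval and 30s timeout are illustrative):

package example

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// createNamespaceWithRetry keeps retrying create() on transient errors
// (such as "connection refused") and gives up with the wait package's
// "timed out waiting for the condition" error once the timeout elapses.
func createNamespaceWithRetry(create func() error) error {
	return wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		if err := create(); err != nil {
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // transient failure: poll again
		}
		return true, nil // namespace created: stop polling
	})
}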
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-auth\]\sServiceAccounts\sshould\srun\sthrough\sthe\slifecycle\sof\sa\sServiceAccount\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d61a20) test/e2e/framework/framework.go:244 +0x7bf from junit_01.xml
{"msg":"FAILED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","completed":7,"skipped":38,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","[sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]"]} [BeforeEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:05.632�[0m Jan 21 13:17:05.632: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename svcaccounts �[38;5;243m01/21/23 13:17:05.633�[0m Jan 21 13:17:05.759: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:07.884: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.882: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.899: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.887: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.889: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.884: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.883: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.887: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.884: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.887: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.218: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.342: INFO: Unexpected error while creating 
namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.342: INFO: Unexpected error: <*errors.errorString | 0xc0001eb900>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.342: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d61a20) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-auth] ServiceAccounts test/e2e/framework/framework.go:187 Jan 21 13:17:43.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.464: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-instrumentation\]\sMetricsGrabber\sshould\sgrab\sall\smetrics\sfrom\sAPI\sserver\.$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d47b80) test/e2e/framework/framework.go:244 +0x7bf from junit_01.xml
{"msg":"FAILED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","completed":8,"skipped":77,"failed":2,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data","[sig-instrumentation] MetricsGrabber should grab all metrics from API server."]} [BeforeEach] [sig-instrumentation] MetricsGrabber test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:05.388�[0m Jan 21 13:17:05.389: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename metrics-grabber �[38;5;243m01/21/23 13:17:05.39�[0m Jan 21 13:17:05.517: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:07.643: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.638: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.641: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.640: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.640: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.643: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.641: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.643: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.657: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.771: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.955: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.078: INFO: Unexpected error while creating namespace: Post 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.078: INFO: Unexpected error: <*errors.errorString | 0xc0001eb840>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.079: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d47b80) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-instrumentation] MetricsGrabber test/e2e/framework/framework.go:187 Jan 21 13:17:43.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.201: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sDNS\sshould\sprovide\sDNS\sfor\sservices\s\s\[Conformance\]$'
test/e2e/network/dns_common.go:503 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00098c420, 0xc001490400, {0xc002863680, 0x10, 0x18}) test/e2e/network/dns_common.go:503 +0x2b0 k8s.io/kubernetes/test/e2e/network.glob..func2.5() test/e2e/network/dns.go:184 +0xc25 from junit_01.xml
{"msg":"FAILED [sig-network] DNS should provide DNS for services [Conformance]","completed":7,"skipped":89,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]} [BeforeEach] [sig-network] DNS test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:32.295�[0m Jan 21 13:16:32.295: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename dns �[38;5;243m01/21/23 13:16:32.296�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:32.638�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:32.863�[0m [It] should provide DNS for services [Conformance] test/e2e/network/dns.go:137 �[1mSTEP:�[0m Creating a test headless service �[38;5;243m01/21/23 13:16:33.089�[0m �[1mSTEP:�[0m Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 232.135.65.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.65.135.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.135.65.100.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/100.65.135.232_tcp@PTR;sleep 1; done �[38;5;243m01/21/23 13:16:33.377�[0m �[1mSTEP:�[0m Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 232.135.65.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.65.135.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.135.65.100.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/100.65.135.232_tcp@PTR;sleep 1; done �[38;5;243m01/21/23 13:16:33.377�[0m �[1mSTEP:�[0m creating a pod to probe DNS �[38;5;243m01/21/23 13:16:33.377�[0m �[1mSTEP:�[0m submitting the pod to kubernetes �[38;5;243m01/21/23 13:16:33.377�[0m Jan 21 13:16:33.499: INFO: Waiting up to 15m0s for pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f" in namespace "dns-2197" to be "running" Jan 21 13:16:33.614: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 114.792035ms Jan 21 13:16:35.730: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230495682s Jan 21 13:16:37.734: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235033299s Jan 21 13:16:39.730: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230890744s Jan 21 13:16:41.729: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229849036s Jan 21 13:16:43.728: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229044246s Jan 21 13:16:45.729: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.229814579s Jan 21 13:16:47.731: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.231753238s Jan 21 13:16:49.735: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.235725388s Jan 21 13:16:51.733: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.233728538s Jan 21 13:16:53.773: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.274336754s Jan 21 13:16:55.727: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.228434731s Jan 21 13:16:57.728: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.228986731s Jan 21 13:16:59.728: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.229438646s Jan 21 13:17:01.729: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.229587879s Jan 21 13:17:03.734: INFO: Pod "dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.234618725s Jan 21 13:17:05.741: INFO: Encountered non-retryable error while getting pod dns-2197/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197/pods/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:05.742: INFO: Unexpected error: <*fmt.wrapError | 0xc0039f27e0>: { msg: "error while waiting for pod dns-2197/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f to be running: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197/pods/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*url.Error | 0xc003acc2a0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197/pods/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f", Err: <*net.OpError | 0xc00398d9f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003acc270>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0039f27a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Jan 21 13:17:05.742: FAIL: error while waiting for pod dns-2197/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f to be running: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197/pods/dns-test-0e33e3ea-0221-43cf-af9f-b5a69238f64f": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00098c420, 0xc001490400, {0xc002863680, 0x10, 0x18}) test/e2e/network/dns_common.go:503 +0x2b0 k8s.io/kubernetes/test/e2e/network.glob..func2.5() test/e2e/network/dns.go:184 +0xc25 �[1mSTEP:�[0m deleting the pod �[38;5;243m01/21/23 13:17:05.742�[0m �[1mSTEP:�[0m deleting the test service �[38;5;243m01/21/23 13:17:05.868�[0m �[1mSTEP:�[0m deleting the test headless service �[38;5;243m01/21/23 13:17:05.995�[0m [AfterEach] [sig-network] DNS test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "dns-2197". 
�[38;5;243m01/21/23 13:17:06.119�[0m Jan 21 13:17:06.243: INFO: Unexpected error: failed to list events in namespace "dns-2197": <*url.Error | 0xc003942d50>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197/events", Err: <*net.OpError | 0xc00392eb40>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003acd590>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0038e90a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.243: FAIL: failed to list events in namespace "dns-2197": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0038d1590, {0xc0038b6ab0, 0x8}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0032a1680}, {0xc0038b6ab0, 0x8}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00098c420, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00098c420) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "dns-2197" for this suite. �[38;5;243m01/21/23 13:17:06.243�[0m Jan 21 13:17:06.366: FAIL: Couldn't delete ns: "dns-2197": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2197", Err:(*net.OpError)(0xc003b84190)}) Full Stack Trace panic({0x6ea5bc0, 0xc0038f3c00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00394d340}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0004bae10, 0xf0}, {0xc0038d1048?, 0x735f76c?, 0xc0038d1068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0001968c0, 0xdb}, {0xc0038d10e0?, 0xc0039582c0?, 0xc0038d1108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003942d50}, {0xc0038e90e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0038d1590, {0xc0038b6ab0, 0x8}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0032a1680}, {0xc0038b6ab0, 0x8}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00098c420, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00098c420) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sKubeProxy\sshould\sset\sTCP\sCLOSE\_WAIT\stimeout\s\[Privileged\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0009dcdc0) test/e2e/framework/framework.go:244 +0x7bf from junit_01.xml
{"msg":"FAILED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","completed":1,"skipped":52,"failed":2,"failures":["[sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","[sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]"]} [BeforeEach] [sig-network] KubeProxy test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:05.652�[0m Jan 21 13:17:05.652: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename kube-proxy �[38;5;243m01/21/23 13:17:05.653�[0m Jan 21 13:17:05.778: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:07.902: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.906: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.900: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.902: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.902: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.903: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.900: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.900: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.902: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.903: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.217: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.349: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 
52.28.228.130:443: connect: connection refused Jan 21 13:17:43.349: INFO: Unexpected error: <*errors.errorString | 0xc000173900>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.350: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0009dcdc0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-network] KubeProxy test/e2e/framework/framework.go:187 Jan 21 13:17:43.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.471: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetpol\sAPI\sshould\ssupport\screating\sNetworkPolicy\sAPI\soperations$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000bba160) test/e2e/framework/framework.go:244 +0x7bf from junit_01.xml
{"msg":"FAILED [sig-network] Netpol API should support creating NetworkPolicy API operations","completed":9,"skipped":92,"failed":2,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","[sig-network] Netpol API should support creating NetworkPolicy API operations"]} [BeforeEach] [sig-network] Netpol API test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:05.916�[0m Jan 21 13:17:05.916: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename netpol �[38;5;243m01/21/23 13:17:05.917�[0m Jan 21 13:17:06.040: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.169: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.165: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.166: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.163: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.170: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.168: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.163: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.165: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.164: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:26.163: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.472: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.594: INFO: Unexpected error while creating namespace: Post 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.594: INFO: Unexpected error: <*errors.errorString | 0xc000215c50>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.594: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000bba160) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-network] Netpol API test/e2e/framework/framework.go:187 Jan 21 13:17:43.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.720: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sendpoint\-Service\:\sudp$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c126e0) test/e2e/framework/framework.go:244 +0x7bf from junit_01.xml
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","completed":10,"skipped":84,"failed":2,"failures":["[sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","[sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp"]} [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.675�[0m Jan 21 13:17:06.675: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename nettest �[38;5;243m01/21/23 13:17:06.676�[0m Jan 21 13:17:06.799: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.926: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.924: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.923: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.922: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.929: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.925: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.924: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.924: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.926: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.189: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.318: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.318: INFO: Unexpected error: <*errors.errorString | 0xc000293be0>: { s: "timed out waiting for the 
condition", } Jan 21 13:17:42.318: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c126e0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-network] Networking test/e2e/framework/framework.go:187 Jan 21 13:17:42.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.445: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\sup\sand\sdown\sservices$'
test/e2e/network/service.go:3974 k8s.io/kubernetes/test/e2e/network.launchHostExecPod({0x7ca63f8, 0xc0027c3800}, {0xc002effa80, 0xd}, {0x73f5c75, 0x1f}) test/e2e/network/service.go:3974 +0x1bd k8s.io/kubernetes/test/e2e/network.verifyServeHostnameServiceUp({0x7ca63f8, 0xc0027c3800}, {0xc002effa80, 0xd}, {0xc003090000, 0x3, 0x3}, {0xc003994400, 0xc}, 0x50) test/e2e/network/service.go:324 +0xa5 k8s.io/kubernetes/test/e2e/network.glob..func25.9() test/e2e/network/service.go:1160 +0x633 from junit_01.xml
{"msg":"FAILED [sig-network] Services should be able to up and down services","completed":8,"skipped":107,"failed":1,"failures":["[sig-network] Services should be able to up and down services"]} [BeforeEach] [sig-network] Services test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:15:42.635�[0m Jan 21 13:15:42.635: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename services �[38;5;243m01/21/23 13:15:42.636�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:15:42.97�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:15:43.19�[0m [BeforeEach] [sig-network] Services test/e2e/network/service.go:758 [It] should be able to up and down services test/e2e/network/service.go:1132 �[1mSTEP:�[0m creating up-down-1 in namespace services-3524 �[38;5;243m01/21/23 13:15:43.411�[0m �[1mSTEP:�[0m creating service up-down-1 in namespace services-3524 �[38;5;243m01/21/23 13:15:43.411�[0m �[1mSTEP:�[0m creating replication controller up-down-1 in namespace services-3524 �[38;5;243m01/21/23 13:15:43.531�[0m I0121 13:15:43.645242 6832 runners.go:193] Created replication controller with name: up-down-1, namespace: services-3524, replica count: 3 I0121 13:15:46.797319 6832 runners.go:193] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0121 13:15:49.797848 6832 runners.go:193] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0121 13:15:52.798188 6832 runners.go:193] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m creating up-down-2 in namespace services-3524 �[38;5;243m01/21/23 13:15:52.912�[0m �[1mSTEP:�[0m creating service up-down-2 in namespace services-3524 �[38;5;243m01/21/23 13:15:52.912�[0m �[1mSTEP:�[0m creating replication controller up-down-2 in namespace services-3524 �[38;5;243m01/21/23 13:15:53.037�[0m I0121 13:15:53.150319 6832 runners.go:193] Created replication controller with name: up-down-2, namespace: services-3524, replica count: 3 I0121 13:15:56.301090 6832 runners.go:193] up-down-2 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0121 13:15:59.302043 6832 runners.go:193] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0121 13:16:02.302772 6832 runners.go:193] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP:�[0m verifying service up-down-1 is up �[38;5;243m01/21/23 13:16:02.415�[0m Jan 21 13:16:02.415: INFO: Creating new host exec pod Jan 21 13:16:02.535: INFO: Waiting up to 5m0s for pod "verify-service-up-host-exec-pod" in namespace "services-3524" to be "running and ready" Jan 21 13:16:02.649: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 114.368542ms Jan 21 13:16:02.649: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:04.762: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.227189534s Jan 21 13:16:04.762: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:06.763: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228616825s Jan 21 13:16:06.763: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:08.761: INFO: Pod "verify-service-up-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.226067082s Jan 21 13:16:08.761: INFO: The phase of Pod verify-service-up-host-exec-pod is Running (Ready = true) Jan 21 13:16:08.761: INFO: Pod "verify-service-up-host-exec-pod" satisfied condition "running and ready" Jan 21 13:16:08.761: INFO: Creating new exec pod Jan 21 13:16:08.875: INFO: Waiting up to 5m0s for pod "verify-service-up-exec-pod-f9txc" in namespace "services-3524" to be "running" Jan 21 13:16:08.986: INFO: Pod "verify-service-up-exec-pod-f9txc": Phase="Pending", Reason="", readiness=false. Elapsed: 110.726757ms Jan 21 13:16:11.103: INFO: Pod "verify-service-up-exec-pod-f9txc": Phase="Running", Reason="", readiness=true. Elapsed: 2.227888682s Jan 21 13:16:11.103: INFO: Pod "verify-service-up-exec-pod-f9txc" satisfied condition "running" �[1mSTEP:�[0m verifying service has 3 reachable backends �[38;5;243m01/21/23 13:16:11.103�[0m Jan 21 13:16:11.103: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -O - -T 1 http://100.69.47.37:80 2>&1 || true; echo; done" in pod services-3524/verify-service-up-host-exec-pod Jan 21 13:16:11.103: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3524 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -O - -T 1 http://100.69.47.37:80 2>&1 || true; echo; done' Jan 21 13:16:12.758: INFO: stderr: "+ seq 1 150\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 
http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 
http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n" Jan 21 13:16:12.758: INFO: stdout: 
"up-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\n" Jan 21 13:16:12.758: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -O - -T 1 http://100.69.47.37:80 2>&1 || true; echo; done" in pod services-3524/verify-service-up-exec-pod-f9txc Jan 21 13:16:12.758: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3524 exec verify-service-up-exec-pod-f9txc -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -O - -T 1 http://100.69.47.37:80 2>&1 || true; echo; done' Jan 21 13:16:14.543: INFO: stderr: "+ seq 1 150\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 
http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 
http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 
http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n+ wget -q -O - -T 1 http://100.69.47.37:80\n+ echo\n" Jan 21 13:16:14.543: INFO: stdout: "up-down-1-62qz5\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-qllvw\nup-down-1-zc66q\nup-down-1-zc66q\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-qllvw\nup-down-1-62qz5\nup-down-1-zc66q\nup-down-1-62qz5\n" �[1mSTEP:�[0m Deleting pod verify-service-up-host-exec-pod in namespace services-3524 �[38;5;243m01/21/23 13:16:14.544�[0m �[1mSTEP:�[0m Deleting pod verify-service-up-exec-pod-f9txc in namespace services-3524 �[38;5;243m01/21/23 13:16:14.664�[0m �[1mSTEP:�[0m verifying service up-down-2 is up �[38;5;243m01/21/23 13:16:14.779�[0m Jan 21 13:16:14.779: INFO: Creating new host exec pod Jan 21 13:16:14.895: INFO: Waiting up to 5m0s for pod "verify-service-up-host-exec-pod" in namespace "services-3524" to be "running and ready" Jan 21 13:16:15.006: INFO: Pod 
"verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 110.957329ms Jan 21 13:16:15.006: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:17.118: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222545993s Jan 21 13:16:17.118: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:19.120: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225061767s Jan 21 13:16:19.120: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:21.118: INFO: Pod "verify-service-up-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.222795653s Jan 21 13:16:21.118: INFO: The phase of Pod verify-service-up-host-exec-pod is Running (Ready = true) Jan 21 13:16:21.118: INFO: Pod "verify-service-up-host-exec-pod" satisfied condition "running and ready" Jan 21 13:16:21.118: INFO: Creating new exec pod Jan 21 13:16:21.232: INFO: Waiting up to 5m0s for pod "verify-service-up-exec-pod-4tk8j" in namespace "services-3524" to be "running" Jan 21 13:16:21.345: INFO: Pod "verify-service-up-exec-pod-4tk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 113.266843ms Jan 21 13:16:23.458: INFO: Pod "verify-service-up-exec-pod-4tk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225515074s Jan 21 13:16:25.458: INFO: Pod "verify-service-up-exec-pod-4tk8j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225715839s Jan 21 13:16:27.469: INFO: Pod "verify-service-up-exec-pod-4tk8j": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.237106318s Jan 21 13:16:27.469: INFO: Pod "verify-service-up-exec-pod-4tk8j" satisfied condition "running" �[1mSTEP:�[0m verifying service has 3 reachable backends �[38;5;243m01/21/23 13:16:27.469�[0m Jan 21 13:16:27.470: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -O - -T 1 http://100.64.209.6:80 2>&1 || true; echo; done" in pod services-3524/verify-service-up-host-exec-pod Jan 21 13:16:27.470: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3524 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -O - -T 1 http://100.64.209.6:80 2>&1 || true; echo; done' Jan 21 13:16:29.338: INFO: stderr: "+ seq 1 150\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 
http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 
http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n" Jan 21 13:16:29.339: INFO: stdout: "up-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-d
own-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\n" Jan 21 13:16:29.339: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -O - -T 1 http://100.64.209.6:80 2>&1 || true; echo; done" in pod services-3524/verify-service-up-exec-pod-4tk8j Jan 21 13:16:29.339: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3524 exec verify-service-up-exec-pod-4tk8j -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -O - -T 1 http://100.64.209.6:80 2>&1 || true; echo; done' Jan 21 13:16:31.263: INFO: stderr: "+ seq 1 150\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 
http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 
http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n+ wget -q -O - -T 1 http://100.64.209.6:80\n+ echo\n" Jan 21 13:16:31.263: INFO: stdout: 
"up-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-w9s6p\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-vkdxg\nup-down-2-w9s6p\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\nup-down-2-vkdxg\nup-down-2-jbfwb\nup-down-2-jbfwb\n" �[1mSTEP:�[0m Deleting pod verify-service-up-host-exec-pod in namespace services-3524 �[38;5;243m01/21/23 13:16:31.263�[0m �[1mSTEP:�[0m Deleting pod verify-service-up-exec-pod-4tk8j in namespace services-3524 �[38;5;243m01/21/23 13:16:31.391�[0m �[1mSTEP:�[0m stopping service up-down-1 �[38;5;243m01/21/23 13:16:31.516�[0m �[1mSTEP:�[0m deleting ReplicationController up-down-1 in namespace services-3524, will wait for the garbage collector to delete the pods �[38;5;243m01/21/23 13:16:31.516�[0m Jan 21 13:16:31.893: INFO: Deleting ReplicationController up-down-1 took: 113.638932ms Jan 21 13:16:31.993: INFO: Terminating ReplicationController up-down-1 pods took: 100.866071ms �[1mSTEP:�[0m verifying service up-down-1 is not up �[38;5;243m01/21/23 13:16:42.751�[0m Jan 21 13:16:42.752: INFO: Creating new host exec pod Jan 21 13:16:42.871: INFO: Waiting up to 5m0s for pod "verify-service-down-host-exec-pod" in namespace "services-3524" to be "running and ready" Jan 21 13:16:42.983: INFO: Pod 
"verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 111.627347ms Jan 21 13:16:42.983: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:45.096: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224298902s Jan 21 13:16:45.096: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:47.096: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22420692s Jan 21 13:16:47.096: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:49.111: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23961051s Jan 21 13:16:49.111: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:51.096: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22452862s Jan 21 13:16:51.096: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:53.099: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 10.227365245s Jan 21 13:16:53.099: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:55.095: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223584803s Jan 21 13:16:55.095: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:57.096: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 14.224339852s Jan 21 13:16:57.096: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:16:59.095: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 16.223067474s Jan 21 13:16:59.095: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:17:01.095: INFO: Pod "verify-service-down-host-exec-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.223211525s Jan 21 13:17:01.095: INFO: The phase of Pod verify-service-down-host-exec-pod is Running (Ready = true) Jan 21 13:17:01.095: INFO: Pod "verify-service-down-host-exec-pod" satisfied condition "running and ready" Jan 21 13:17:01.095: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3524 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.47.37:80 && echo service-down-failed' Jan 21 13:17:04.575: INFO: rc: 28 Jan 21 13:17:04.575: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.47.37:80 && echo service-down-failed" in pod services-3524/verify-service-down-host-exec-pod: error running /home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3524 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.47.37:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://100.69.47.37:80 command terminated with exit code 28 error: exit status 28 Output: �[1mSTEP:�[0m Deleting pod verify-service-down-host-exec-pod in namespace services-3524 �[38;5;243m01/21/23 13:17:04.575�[0m �[1mSTEP:�[0m verifying service up-down-2 is still up �[38;5;243m01/21/23 13:17:04.7�[0m Jan 21 13:17:04.700: INFO: Creating new host exec pod Jan 21 13:17:04.819: INFO: Waiting up to 5m0s for pod "verify-service-up-host-exec-pod" in namespace "services-3524" to be "running and ready" Jan 21 13:17:04.938: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 118.266193ms Jan 21 13:17:04.938: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 21 13:17:07.061: INFO: Encountered non-retryable error while getting pod services-3524/verify-service-up-host-exec-pod: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524/pods/verify-service-up-host-exec-pod": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:07.061: INFO: Unexpected error: <*fmt.wrapError | 0xc002f5c600>: { msg: "error while waiting for pod services-3524/verify-service-up-host-exec-pod to be running and ready: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524/pods/verify-service-up-host-exec-pod\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*url.Error | 0xc00238ba10>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524/pods/verify-service-up-host-exec-pod", Err: <*net.OpError | 0xc003990b90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00275c330>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002f5c5c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Jan 21 13:17:07.061: FAIL: error while waiting for pod services-3524/verify-service-up-host-exec-pod to be running and ready: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524/pods/verify-service-up-host-exec-pod": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/network.launchHostExecPod({0x7ca63f8, 0xc0027c3800}, {0xc002effa80, 0xd}, {0x73f5c75, 0x1f}) test/e2e/network/service.go:3974 +0x1bd k8s.io/kubernetes/test/e2e/network.verifyServeHostnameServiceUp({0x7ca63f8, 0xc0027c3800}, {0xc002effa80, 0xd}, {0xc003090000, 0x3, 0x3}, {0xc003994400, 0xc}, 0x50) test/e2e/network/service.go:324 +0xa5 k8s.io/kubernetes/test/e2e/network.glob..func25.9() test/e2e/network/service.go:1160 +0x633 [AfterEach] [sig-network] Services test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "services-3524". �[38;5;243m01/21/23 13:17:07.062�[0m Jan 21 13:17:07.186: INFO: Unexpected error: failed to list events in namespace "services-3524": <*url.Error | 0xc002f30480>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524/events", Err: <*net.OpError | 0xc003990f50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002ee46f0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002f5ca40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:07.186: FAIL: failed to list events in namespace "services-3524": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001d15590, {0xc002effa80, 0xd}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0027c3800}, {0xc002effa80, 0xd}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d16000, 0x1?) 
test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d16000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "services-3524" for this suite. �[38;5;243m01/21/23 13:17:07.186�[0m Jan 21 13:17:07.312: FAIL: Couldn't delete ns: "services-3524": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/services-3524", Err:(*net.OpError)(0xc00316eaa0)}) Full Stack Trace panic({0x6ea5bc0, 0xc0008b8880}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000bc8460}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00058d800, 0xfa}, {0xc001d15048?, 0x735f76c?, 0xc001d15068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00150e4b0, 0xe5}, {0xc001d150e0?, 0xc002b73200?, 0xc001d15108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc002f30480}, {0xc002f5ca80?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001d15590, {0xc002effa80, 0xd}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0027c3800}, {0xc002effa80, 0xd}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000d16000, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000d16000) test/e2e/framework/framework.go:435 +0x21d [AfterEach] [sig-network] Services test/e2e/network/service.go:762
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/common/node/container_probe.go:910 k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000a9c000, 0xc001580400, 0x0, 0xe?) test/e2e/common/node/container_probe.go:910 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.9() test/e2e/common/node/container_probe.go:219 +0x118
from junit_01.xml
{"msg":"FAILED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","completed":1,"skipped":24,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:15:42.153�[0m Jan 21 13:15:42.153: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename container-probe �[38;5;243m01/21/23 13:15:42.154�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:15:42.5�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:15:42.724�[0m [BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] test/e2e/common/node/container_probe.go:211 �[1mSTEP:�[0m Creating pod test-webserver-790427d1-6992-45af-b377-e7e40d96956d in namespace container-probe-3870 �[38;5;243m01/21/23 13:15:42.949�[0m Jan 21 13:15:43.070: INFO: Waiting up to 5m0s for pod "test-webserver-790427d1-6992-45af-b377-e7e40d96956d" in namespace "container-probe-3870" to be "not pending" Jan 21 13:15:43.183: INFO: Pod "test-webserver-790427d1-6992-45af-b377-e7e40d96956d": Phase="Pending", Reason="", readiness=false. Elapsed: 112.572822ms Jan 21 13:15:45.296: INFO: Pod "test-webserver-790427d1-6992-45af-b377-e7e40d96956d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225906216s Jan 21 13:15:47.298: INFO: Pod "test-webserver-790427d1-6992-45af-b377-e7e40d96956d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227769236s Jan 21 13:15:49.299: INFO: Pod "test-webserver-790427d1-6992-45af-b377-e7e40d96956d": Phase="Running", Reason="", readiness=true. Elapsed: 6.228231729s Jan 21 13:15:49.299: INFO: Pod "test-webserver-790427d1-6992-45af-b377-e7e40d96956d" satisfied condition "not pending" Jan 21 13:15:49.299: INFO: Started pod test-webserver-790427d1-6992-45af-b377-e7e40d96956d in namespace container-probe-3870 �[1mSTEP:�[0m checking the pod's current state and verifying that restartCount is present �[38;5;243m01/21/23 13:15:49.299�[0m Jan 21 13:15:49.412: INFO: Initial restart count of pod test-webserver-790427d1-6992-45af-b377-e7e40d96956d is 0 Jan 21 13:17:05.755: INFO: Unexpected error: getting pod : <*url.Error | 0xc0028d8cc0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3870/pods/test-webserver-790427d1-6992-45af-b377-e7e40d96956d", Err: <*net.OpError | 0xc003c98140>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002a19ec0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000c3cde0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.755: FAIL: getting pod : Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3870/pods/test-webserver-790427d1-6992-45af-b377-e7e40d96956d": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000a9c000, 0xc001580400, 0x0, 0xe?) 
test/e2e/common/node/container_probe.go:910 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.9() test/e2e/common/node/container_probe.go:219 +0x118 �[1mSTEP:�[0m deleting the pod �[38;5;243m01/21/23 13:17:05.755�[0m [AfterEach] [sig-node] Probing container test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "container-probe-3870". �[38;5;243m01/21/23 13:17:05.755�[0m Jan 21 13:17:05.879: INFO: Unexpected error: failed to list events in namespace "container-probe-3870": <*url.Error | 0xc003b4f8f0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3870/events", Err: <*net.OpError | 0xc002d81db0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0028a2600>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003c50980>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.879: FAIL: failed to list events in namespace "container-probe-3870": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3870/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00194b590, {0xc0030947e0, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc001c1a180}, {0xc0030947e0, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000a9c000, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a9c000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "container-probe-3870" for this suite. �[38;5;243m01/21/23 13:17:05.88�[0m Jan 21 13:17:06.008: FAIL: Couldn't delete ns: "container-probe-3870": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3870": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3870", Err:(*net.OpError)(0xc0027a4230)}) Full Stack Trace panic({0x6ea5bc0, 0xc003c45400}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0008e9570}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0009fefc0, 0x108}, {0xc00194b048?, 0x735f76c?, 0xc00194b068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00010d000, 0xf3}, {0xc00194b0e0?, 0xc0005b3200?, 0xc00194b108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003b4f8f0}, {0xc003c509c0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00194b590, {0xc0030947e0, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc001c1a180}, {0xc0030947e0, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000a9c000, 0x1?) 
test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a9c000) test/e2e/framework/framework.go:435 +0x21d
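A minimal rerun sketch, not taken from this job's configuration: the `go run hack/e2e.go` invocation above ultimately drives the e2e.test binary, which can also be pointed at an existing cluster directly. The binary path and kubeconfig path below are assumptions for illustration; only --kubeconfig, --provider and the ginkgo.focus passthrough are standard e2e framework flags.
# assumed locally built test binary at _output/bin/e2e.test
./_output/bin/e2e.test \
  --kubeconfig="$HOME/.kube/config" \
  --provider=skeleton \
  --ginkgo.focus='\[sig-node\] Probing container should \*not\* be restarted with a /healthz http liveness probe'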
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\smark\sreadiness\son\spods\sto\sfalse\sand\sdisable\sliveness\sprobes\swhile\spod\sis\sin\sprogress\sof\sterminating$'
test/e2e/common/node/container_probe.go:709 k8s.io/kubernetes/test/e2e/common/node.glob..func2.24.2() test/e2e/common/node/container_probe.go:709 +0xdf reflect.Value.call({0x655bf80?, 0xc00259dec0?, 0xc001c53b90?}, {0x7361296, 0x4}, {0xc001c53c00, 0x0, 0xc001c53cd0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x655bf80?, 0xc00259dec0?, 0x2?}, {0xc001c53c00?, 0x0?, 0x1?}) /usr/local/go/src/reflect/value.go:368 +0xbc k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.NewAsyncAssertion.func1() vendor/github.com/onsi/gomega/internal/async_assertion.go:48 +0xb1 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).pollActual(0xc001c53d68?) vendor/github.com/onsi/gomega/internal/async_assertion.go:134 +0x39 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc002ff79f0, {0x7c55798, 0xa9ab400}, 0x0, {0xc000f574d0, 0x1, 0x1}) vendor/github.com/onsi/gomega/internal/async_assertion.go:225 +0x32c k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).ShouldNot(0xc002ff79f0, {0x7c55798, 0xa9ab400}, {0xc000f574d0, 0x1, 0x1}) vendor/github.com/onsi/gomega/internal/async_assertion.go:114 +0x8a k8s.io/kubernetes/test/e2e/common/node.glob..func2.24() test/e2e/common/node/container_probe.go:720 +0x80c
from junit_01.xml
{"msg":"FAILED [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating","completed":5,"skipped":51,"failed":1,"failures":["[sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating"]} [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:14.085�[0m Jan 21 13:16:14.085: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename container-probe �[38;5;243m01/21/23 13:16:14.086�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:14.426�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:14.649�[0m [BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:59 [It] should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating test/e2e/common/node/container_probe.go:623 Jan 21 13:16:14.991: INFO: Waiting up to 5m0s for all pods (need at least 1) in namespace 'container-probe-2609' to be running and ready Jan 21 13:16:15.328: INFO: The status of Pod probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 21 13:16:15.328: INFO: 0 / 1 pods in namespace 'container-probe-2609' are running and ready (0 seconds elapsed) Jan 21 13:16:15.328: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. Jan 21 13:16:15.328: INFO: POD NODE PHASE GRACE CONDITIONS Jan 21 13:16:15.328: INFO: probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 i-04e6f5db7ce157579 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC }] Jan 21 13:16:15.328: INFO: Jan 21 13:16:17.666: INFO: The status of Pod probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 21 13:16:17.666: INFO: 0 / 1 pods in namespace 'container-probe-2609' are running and ready (2 seconds elapsed) Jan 21 13:16:17.666: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. Jan 21 13:16:17.666: INFO: POD NODE PHASE GRACE CONDITIONS Jan 21 13:16:17.666: INFO: probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 i-04e6f5db7ce157579 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC }] Jan 21 13:16:17.666: INFO: Jan 21 13:16:19.699: INFO: The status of Pod probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 21 13:16:19.699: INFO: 0 / 1 pods in namespace 'container-probe-2609' are running and ready (4 seconds elapsed) Jan 21 13:16:19.699: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. 
Jan 21 13:16:19.699: INFO: POD NODE PHASE GRACE CONDITIONS Jan 21 13:16:19.699: INFO: probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 i-04e6f5db7ce157579 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC }] Jan 21 13:16:19.699: INFO: Jan 21 13:16:21.680: INFO: The status of Pod probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 21 13:16:21.680: INFO: 0 / 1 pods in namespace 'container-probe-2609' are running and ready (6 seconds elapsed) Jan 21 13:16:21.680: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. Jan 21 13:16:21.680: INFO: POD NODE PHASE GRACE CONDITIONS Jan 21 13:16:21.680: INFO: probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 i-04e6f5db7ce157579 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC }] Jan 21 13:16:21.680: INFO: Jan 21 13:16:23.666: INFO: The status of Pod probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 21 13:16:23.666: INFO: 0 / 1 pods in namespace 'container-probe-2609' are running and ready (8 seconds elapsed) Jan 21 13:16:23.666: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. Jan 21 13:16:23.666: INFO: POD NODE PHASE GRACE CONDITIONS Jan 21 13:16:23.666: INFO: probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 i-04e6f5db7ce157579 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC }] Jan 21 13:16:23.666: INFO: Jan 21 13:16:25.669: INFO: The status of Pod probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 21 13:16:25.669: INFO: 0 / 1 pods in namespace 'container-probe-2609' are running and ready (10 seconds elapsed) Jan 21 13:16:25.669: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. 
Jan 21 13:16:25.669: INFO: POD NODE PHASE GRACE CONDITIONS Jan 21 13:16:25.669: INFO: probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2 i-04e6f5db7ce157579 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC ContainersNotReady containers with unready status: [probe-test-713bd06b-f433-4a24-923e-c7ebe78a6fa2]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-21 13:16:14 +0000 UTC }] Jan 21 13:16:25.670: INFO: Jan 21 13:16:27.675: INFO: 1 / 1 pods in namespace 'container-probe-2609' are running and ready (12 seconds elapsed) Jan 21 13:16:27.675: INFO: expected 0 pod replicas in namespace 'container-probe-2609', 0 are Running and Ready. Jan 21 13:17:06.107: INFO: Unexpected error: <*url.Error | 0xc0020a55f0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-2609/events", Err: <*net.OpError | 0xc0029cae10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001ae89f0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000997740>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.107: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-2609/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func2.24.2() test/e2e/common/node/container_probe.go:709 +0xdf reflect.Value.call({0x655bf80?, 0xc00259dec0?, 0xc001c53b90?}, {0x7361296, 0x4}, {0xc001c53c00, 0x0, 0xc001c53cd0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x655bf80?, 0xc00259dec0?, 0x2?}, {0xc001c53c00?, 0x0?, 0x1?}) /usr/local/go/src/reflect/value.go:368 +0xbc k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.NewAsyncAssertion.func1() vendor/github.com/onsi/gomega/internal/async_assertion.go:48 +0xb1 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).pollActual(0xc001c53d68?) vendor/github.com/onsi/gomega/internal/async_assertion.go:134 +0x39 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).match(0xc002ff79f0, {0x7c55798, 0xa9ab400}, 0x0, {0xc000f574d0, 0x1, 0x1}) vendor/github.com/onsi/gomega/internal/async_assertion.go:225 +0x32c k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal.(*AsyncAssertion).ShouldNot(0xc002ff79f0, {0x7c55798, 0xa9ab400}, {0xc000f574d0, 0x1, 0x1}) vendor/github.com/onsi/gomega/internal/async_assertion.go:114 +0x8a k8s.io/kubernetes/test/e2e/common/node.glob..func2.24() test/e2e/common/node/container_probe.go:720 +0x80c [AfterEach] [sig-node] Probing container test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "container-probe-2609". 
�[38;5;243m01/21/23 13:17:06.108�[0m Jan 21 13:17:06.232: INFO: Unexpected error: failed to list events in namespace "container-probe-2609": <*url.Error | 0xc0027ec7b0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-2609/events", Err: <*net.OpError | 0xc002c85f90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0020a5cb0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000629b20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.232: FAIL: failed to list events in namespace "container-probe-2609": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-2609/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0005e5590, {0xc0033b0d98, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0008e4300}, {0xc0033b0d98, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00051a2c0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00051a2c0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "container-probe-2609" for this suite. �[38;5;243m01/21/23 13:17:06.232�[0m Jan 21 13:17:06.357: FAIL: Couldn't delete ns: "container-probe-2609": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-2609": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-2609", Err:(*net.OpError)(0xc003f34050)}) Full Stack Trace panic({0x6ea5bc0, 0xc0041a0900}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00078afc0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00077c6c0, 0x108}, {0xc0005e5048?, 0x735f76c?, 0xc0005e5068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00058d100, 0xf3}, {0xc0005e50e0?, 0xc003ddb200?, 0xc0005e5108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc0027ec7b0}, {0xc000629b80?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0005e5590, {0xc0033b0d98, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0008e4300}, {0xc0033b0d98, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00051a2c0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00051a2c0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sRuntimeClass\sshould\sreject\sa\sPod\srequesting\sa\sRuntimeClass\swith\san\sunconfigured\shandler\s\[NodeFeature\:RuntimeHandler\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d1c580) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]","completed":7,"skipped":56,"failed":2,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","[sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]"]} [BeforeEach] [sig-node] RuntimeClass test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:07.303�[0m Jan 21 13:17:07.303: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename runtimeclass �[38;5;243m01/21/23 13:17:07.304�[0m Jan 21 13:17:07.427: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.550: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.551: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.552: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.549: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.552: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.550: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.555: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.583: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.551: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.956: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.082: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection 
refused Jan 21 13:17:43.082: INFO: Unexpected error: <*errors.errorString | 0xc000315b30>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.082: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d1c580) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-node] RuntimeClass test/e2e/framework/framework.go:187 Jan 21 13:17:43.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.207: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsUser\sshould\srun\sthe\scontainer\swith\suid\s65534\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d35a20) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","completed":6,"skipped":72,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","[sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Security Context test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:08.436�[0m Jan 21 13:17:08.436: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename security-context-test �[38;5;243m01/21/23 13:17:08.437�[0m Jan 21 13:17:08.562: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.686: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.688: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.688: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.688: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.692: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.687: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.686: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.688: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:41.932: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.076: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.076: INFO: Unexpected error: <*errors.errorString | 0xc0001bf8b0>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.076: FAIL: timed out waiting for the condition Full Stack Trace 
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d35a20) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-node] Security Context test/e2e/framework/framework.go:187 Jan 21 13:17:42.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.214: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c27e40) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","completed":7,"skipped":91,"failed":2,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Security Context test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.37�[0m Jan 21 13:17:06.370: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename security-context-test �[38;5;243m01/21/23 13:17:06.372�[0m Jan 21 13:17:06.496: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.618: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.623: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.618: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.621: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.623: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:41.931: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.072: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection 
refused Jan 21 13:17:42.072: INFO: Unexpected error: <*errors.errorString | 0xc000205b90>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.072: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000c27e40) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-node] Security Context test/e2e/framework/framework.go:187 Jan 21 13:17:42.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.208: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sCSI\sEphemeral\-volume\s\(default\sfs\)\]\sephemeral\sshould\screate\sread\-only\sinline\sephemeral\svolume$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d48160) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","completed":5,"skipped":44,"failed":2,"failures":["[sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume"]} [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.702�[0m Jan 21 13:17:06.703: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename ephemeral �[38;5;243m01/21/23 13:17:06.704�[0m Jan 21 13:17:06.828: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.949: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.949: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.952: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.951: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.951: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.951: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.952: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.954: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.952: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.447: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.571: INFO: Unexpected error while 
creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.571: INFO: Unexpected error: <*errors.errorString | 0xc00011dc60>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.571: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000d48160) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral test/e2e/framework/framework.go:187 Jan 21 13:17:42.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.699: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\smock\svolume\sCSI\sattach\stest\susing\smock\sdriver\sshould\snot\srequire\sVolumeAttach\sfor\sdrivers\swithout\sattachment$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000de5ce0) test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","completed":1,"skipped":28,"failed":2,"failures":["[sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","[sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment"]} [BeforeEach] [sig-storage] CSI mock volume test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.014�[0m Jan 21 13:17:06.014: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename csi-mock-volumes �[38;5;243m01/21/23 13:17:06.015�[0m Jan 21 13:17:06.139: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.263: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.261: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.263: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.265: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.266: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.266: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.263: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.268: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.265: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:26.265: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.723: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.848: INFO: Unexpected 
error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.848: INFO: Unexpected error: <*errors.errorString | 0xc0000c5bd0>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.848: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc000de5ce0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-storage] CSI mock volume test/e2e/framework/framework.go:187 Jan 21 13:17:43.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.973: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\smock\svolume\sCSI\sworkload\sinformation\susing\smock\sdriver\sshould\snot\sbe\spassed\swhen\spodInfoOnMount\=false$'
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605
from junit_01.xml
{"msg":"FAILED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","completed":6,"skipped":32,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false"]} [BeforeEach] [sig-storage] CSI mock volume test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:17.308�[0m Jan 21 13:16:17.308: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename csi-mock-volumes �[38;5;243m01/21/23 13:16:17.309�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:17.644�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:17.865�[0m [It] should not be passed when podInfoOnMount=false test/e2e/storage/csi_mock_volume.go:517 �[1mSTEP:�[0m Building a driver namespace object, basename csi-mock-volumes-5030 �[38;5;243m01/21/23 13:16:18.087�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:18.422�[0m �[1mSTEP:�[0m deploying csi mock driver �[38;5;243m01/21/23 13:16:18.645�[0m Jan 21 13:16:19.105: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-attacher Jan 21 13:16:19.217: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5030 Jan 21 13:16:19.217: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5030 Jan 21 13:16:19.334: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5030 Jan 21 13:16:19.446: INFO: creating *v1.Role: csi-mock-volumes-5030-7657/external-attacher-cfg-csi-mock-volumes-5030 Jan 21 13:16:19.563: INFO: creating *v1.RoleBinding: csi-mock-volumes-5030-7657/csi-attacher-role-cfg Jan 21 13:16:19.698: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-provisioner Jan 21 13:16:19.814: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5030 Jan 21 13:16:19.814: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5030 Jan 21 13:16:19.933: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5030 Jan 21 13:16:20.047: INFO: creating *v1.Role: csi-mock-volumes-5030-7657/external-provisioner-cfg-csi-mock-volumes-5030 Jan 21 13:16:20.166: INFO: creating *v1.RoleBinding: csi-mock-volumes-5030-7657/csi-provisioner-role-cfg Jan 21 13:16:20.279: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-resizer Jan 21 13:16:20.396: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5030 Jan 21 13:16:20.396: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5030 Jan 21 13:16:20.512: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5030 Jan 21 13:16:20.624: INFO: creating *v1.Role: csi-mock-volumes-5030-7657/external-resizer-cfg-csi-mock-volumes-5030 Jan 21 13:16:20.736: INFO: creating *v1.RoleBinding: csi-mock-volumes-5030-7657/csi-resizer-role-cfg Jan 21 13:16:20.850: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-snapshotter Jan 21 13:16:20.976: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5030 Jan 21 13:16:20.976: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5030 Jan 21 13:16:21.101: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5030 Jan 21 13:16:21.214: INFO: 
creating *v1.Role: csi-mock-volumes-5030-7657/external-snapshotter-leaderelection-csi-mock-volumes-5030 Jan 21 13:16:21.331: INFO: creating *v1.RoleBinding: csi-mock-volumes-5030-7657/external-snapshotter-leaderelection Jan 21 13:16:21.445: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-mock Jan 21 13:16:21.560: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5030 Jan 21 13:16:21.680: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5030 Jan 21 13:16:21.794: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5030 Jan 21 13:16:21.911: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5030 Jan 21 13:16:22.024: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5030 Jan 21 13:16:22.136: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5030 Jan 21 13:16:22.249: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5030 Jan 21 13:16:22.361: INFO: creating *v1.StatefulSet: csi-mock-volumes-5030-7657/csi-mockplugin Jan 21 13:16:22.481: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5030 Jan 21 13:16:22.596: INFO: creating *v1.StatefulSet: csi-mock-volumes-5030-7657/csi-mockplugin-attacher Jan 21 13:16:22.741: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5030" Jan 21 13:16:22.901: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5030 to register on node i-0d8577dd20eb0d9bc �[1mSTEP:�[0m Creating pod �[38;5;243m01/21/23 13:16:25.161�[0m Jan 21 13:16:25.283: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 21 13:16:25.423: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-fbc44] to have phase Bound Jan 21 13:16:25.535: INFO: PersistentVolumeClaim pvc-fbc44 found and phase=Bound (112.116672ms) Jan 21 13:16:25.876: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-6rjfr" in namespace "csi-mock-volumes-5030" to be "running" Jan 21 13:16:26.001: INFO: Pod "pvc-volume-tester-6rjfr": Phase="Pending", Reason="", readiness=false. Elapsed: 125.415287ms Jan 21 13:16:28.116: INFO: Pod "pvc-volume-tester-6rjfr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240045367s Jan 21 13:16:30.114: INFO: Pod "pvc-volume-tester-6rjfr": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.238003179s Jan 21 13:16:30.114: INFO: Pod "pvc-volume-tester-6rjfr" satisfied condition "running" �[1mSTEP:�[0m Deleting the previously created pod �[38;5;243m01/21/23 13:16:30.114�[0m Jan 21 13:16:30.114: INFO: Deleting pod "pvc-volume-tester-6rjfr" in namespace "csi-mock-volumes-5030" Jan 21 13:16:30.239: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6rjfr" to be fully deleted �[1mSTEP:�[0m Checking CSI driver logs �[38;5;243m01/21/23 13:16:34.503�[0m Jan 21 13:16:34.734: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"ce98246a-998d-11ed-a1b7-c6d42c6d6db4","target_path":"/var/lib/kubelet/pods/efe15397-f8bc-42e0-b1da-4a6263c7789d/volumes/kubernetes.io~csi/pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:} �[1mSTEP:�[0m Deleting pod pvc-volume-tester-6rjfr �[38;5;243m01/21/23 13:16:34.735�[0m Jan 21 13:16:34.735: INFO: Deleting pod "pvc-volume-tester-6rjfr" in namespace "csi-mock-volumes-5030" �[1mSTEP:�[0m Deleting claim pvc-fbc44 �[38;5;243m01/21/23 13:16:34.847�[0m Jan 21 13:16:35.094: INFO: Waiting up to 2m0s for PersistentVolume pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40 to get deleted Jan 21 13:16:35.209: INFO: PersistentVolume pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40 found and phase=Released (114.619957ms) Jan 21 13:16:37.331: INFO: PersistentVolume pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40 found and phase=Released (2.237201712s) Jan 21 13:16:39.444: INFO: PersistentVolume pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40 found and phase=Released (4.349917863s) Jan 21 13:16:41.558: INFO: PersistentVolume pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40 found and phase=Released (6.463430057s) Jan 21 13:16:43.670: INFO: PersistentVolume pvc-f2028bd0-2707-4bc8-88a8-9701e22e1b40 was removed �[1mSTEP:�[0m Deleting storageclass csi-mock-volumes-5030-schjqwc �[38;5;243m01/21/23 13:16:43.67�[0m �[1mSTEP:�[0m Cleaning up resources �[38;5;243m01/21/23 13:16:43.785�[0m �[1mSTEP:�[0m deleting the test namespace: csi-mock-volumes-5030 �[38;5;243m01/21/23 13:16:43.785�[0m �[1mSTEP:�[0m Waiting for namespaces [csi-mock-volumes-5030] to vanish �[38;5;243m01/21/23 13:16:43.898�[0m �[1mSTEP:�[0m uninstalling csi mock driver �[38;5;243m01/21/23 13:16:50.02�[0m Jan 21 13:16:50.020: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-attacher Jan 21 13:16:50.137: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5030 Jan 21 13:16:50.251: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5030 Jan 21 13:16:50.368: INFO: deleting *v1.Role: csi-mock-volumes-5030-7657/external-attacher-cfg-csi-mock-volumes-5030 Jan 21 13:16:50.532: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5030-7657/csi-attacher-role-cfg Jan 21 13:16:50.647: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-provisioner Jan 21 13:16:50.761: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5030 Jan 21 13:16:50.876: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5030 Jan 21 13:16:50.991: INFO: deleting *v1.Role: csi-mock-volumes-5030-7657/external-provisioner-cfg-csi-mock-volumes-5030 Jan 21 13:16:51.107: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5030-7657/csi-provisioner-role-cfg Jan 21 13:16:51.235: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-resizer Jan 21 13:16:51.348: INFO: deleting 
*v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5030 Jan 21 13:16:51.460: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5030 Jan 21 13:16:51.573: INFO: deleting *v1.Role: csi-mock-volumes-5030-7657/external-resizer-cfg-csi-mock-volumes-5030 Jan 21 13:16:51.690: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5030-7657/csi-resizer-role-cfg Jan 21 13:16:51.804: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-snapshotter Jan 21 13:16:51.925: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5030 Jan 21 13:16:52.041: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5030 Jan 21 13:16:52.161: INFO: deleting *v1.Role: csi-mock-volumes-5030-7657/external-snapshotter-leaderelection-csi-mock-volumes-5030 Jan 21 13:16:52.280: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5030-7657/external-snapshotter-leaderelection Jan 21 13:16:52.399: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5030-7657/csi-mock Jan 21 13:16:52.518: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5030 Jan 21 13:16:52.631: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5030 Jan 21 13:16:52.747: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5030 Jan 21 13:16:52.860: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5030 Jan 21 13:16:52.974: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5030 Jan 21 13:16:53.088: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5030 Jan 21 13:16:53.211: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5030 Jan 21 13:16:53.329: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5030-7657/csi-mockplugin Jan 21 13:16:53.454: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-5030 Jan 21 13:16:53.579: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5030-7657/csi-mockplugin-attacher �[1mSTEP:�[0m deleting the driver namespace: csi-mock-volumes-5030-7657 �[38;5;243m01/21/23 13:16:53.751�[0m �[1mSTEP:�[0m Waiting for namespaces [csi-mock-volumes-5030-7657] to vanish �[38;5;243m01/21/23 13:16:53.866�[0m Jan 21 13:17:05.996: INFO: error deleting namespace csi-mock-volumes-5030-7657: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused [AfterEach] [sig-storage] CSI mock volume test/e2e/framework/framework.go:187 Jan 21 13:17:06.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:06.243: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace �[1mSTEP:�[0m Destroying namespace "csi-mock-volumes-5030-7657" for this suite. �[38;5;243m01/21/23 13:17:06.243�[0m �[1mSTEP:�[0m Collecting events from namespace "csi-mock-volumes-5030-7657". 
�[38;5;243m01/21/23 13:17:06.367�[0m Jan 21 13:17:06.496: INFO: Unexpected error: failed to list events in namespace "csi-mock-volumes-5030-7657": <*url.Error | 0xc00351d3b0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5030-7657/events", Err: <*net.OpError | 0xc00351a730>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0034b6810>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003b0aba0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.496: FAIL: failed to list events in namespace "csi-mock-volumes-5030-7657": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5030-7657/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0025fc278, {0xc003e3aba0, 0x1a}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0017e0a80}, {0xc003e3aba0, 0x1a}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:402 +0x81d panic({0x6ea5bc0, 0xc001642ec0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc001626930}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00333cb60, 0xd2}, {0xc002ab95a8?, 0x735f76c?, 0xc002ab95d0?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x7432d64?, 0xc0017e0a80?}, {0xc002ab9890?, 0x738bf9c?, 0x10?}) test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000aba000) test/e2e/framework/framework.go:483 +0xb8a
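A side note on the structured error dumps in this failure: the innermost error is printed as <syscall.Errno>0x6f. 0x6f is 111, which on Linux is ECONNREFUSED, i.e. the apiserver endpoint at 52.28.228.130:443 actively refused the TCP connection. The following is a minimal Go sketch (illustrative only, not part of the test framework) showing how that errno relates to the *net.OpError / *os.SyscallError nesting seen in the dump:

package main

import (
	"errors"
	"fmt"
	"net"
	"os"
	"syscall"
)

func main() {
	// The dumps print the raw errno as <syscall.Errno>0x6f.
	// 0x6f == 111, which on Linux is ECONNREFUSED ("connection refused").
	fmt.Println(syscall.Errno(0x6f) == syscall.ECONNREFUSED) // true on Linux

	// The client surfaces it wrapped as *net.OpError -> *os.SyscallError -> syscall.Errno,
	// the same nesting shown in the dump above, so errors.Is still matches the sentinel.
	err := &net.OpError{Op: "dial", Net: "tcp",
		Err: os.NewSyscallError("connect", syscall.Errno(0x6f))}
	fmt.Println(errors.Is(err, syscall.ECONNREFUSED)) // true
}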
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sDynamic\sProvisioning\sInvalid\sAWS\sKMS\skey\sshould\sreport\san\serror\sand\screate\sno\sPV$'
test/e2e/storage/volume_provisioning.go:805
k8s.io/kubernetes/test/e2e/storage.glob..func32.6.1()
	test/e2e/storage/volume_provisioning.go:805 +0x5d6
from junit_01.xml
{"msg":"FAILED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","completed":1,"skipped":44,"failed":1,"failures":["[sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV"]} [BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:13:31.698�[0m Jan 21 13:13:31.699: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volume-provisioning �[38;5;243m01/21/23 13:13:31.7�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:13:32.043�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:13:32.271�[0m [BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/storage/volume_provisioning.go:146 [It] should report an error and create no PV test/e2e/storage/volume_provisioning.go:743 �[1mSTEP:�[0m creating a StorageClass �[38;5;243m01/21/23 13:13:32.497�[0m �[1mSTEP:�[0m Creating a StorageClass �[38;5;243m01/21/23 13:13:32.497�[0m �[1mSTEP:�[0m creating a claim object with a suffix for gluster dynamic provisioner �[38;5;243m01/21/23 13:13:32.726�[0m Jan 21 13:13:32.726: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 21 13:17:05.127: INFO: Unexpected error: Error waiting for PVC to fail provisioning: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/persistentvolumeclaims/pvc-tp5vl": dial tcp 52.28.228.130:443: connect: connection refused: <*url.Error | 0xc00268af30>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/persistentvolumeclaims/pvc-tp5vl", Err: <*net.OpError | 0xc001b25540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004028000>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000963e20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.127: FAIL: Error waiting for PVC to fail provisioning: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/persistentvolumeclaims/pvc-tp5vl": dial tcp 52.28.228.130:443: connect: connection refused: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/persistentvolumeclaims/pvc-tp5vl": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func32.6.1() test/e2e/storage/volume_provisioning.go:805 +0x5d6 Jan 21 13:17:05.127: INFO: deleting claim "volume-provisioning-4441"/"pvc-tp5vl" Jan 21 13:17:05.251: FAIL: Error deleting claim "pvc-tp5vl". 
Error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/persistentvolumeclaims/pvc-tp5vl": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace panic({0x6ea5bc0, 0xc001401b40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0003a8690}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0026b4200, 0x1ef}, {0xc0038efc30?, 0x735f76c?, 0xc0038efc50?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0026e6000, 0x1da}, {0xc0038efcc8?, 0xc00075e0e0?, 0xc0038efcf0?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc00268af30}, {0xc000963e60?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage.glob..func32.6.1() test/e2e/storage/volume_provisioning.go:805 +0x5d6 Jan 21 13:17:05.251: INFO: deleting storage class volume-provisioning-4441-invalid-awsnczs7 Jan 21 13:17:05.377: INFO: Unexpected error: delete storage class: <*url.Error | 0xc004028630>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-provisioning-4441-invalid-awsnczs7", Err: <*net.OpError | 0xc000a482d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002844120>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000e8e000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.377: FAIL: delete storage class: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-provisioning-4441-invalid-awsnczs7": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.SetupStorageClass.func2() test/e2e/storage/testsuites/provisioning.go:573 +0x1a8 panic({0x6ea5bc0, 0xc002304840}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0003e3650}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002cd4240, 0x117}, {0xc0038ef5c8?, 0x735f76c?, 0xc0038ef5f0?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x74076ea?, 0x9?}, {0xc0038ef6d8?, 0x0?, 0x0?}) test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/storage.glob..func32.6.1.1() test/e2e/storage/volume_provisioning.go:771 +0x24f panic({0x6ea5bc0, 0xc001401b40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0003a8690}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0026b4200, 0x1ef}, {0xc0038efc30?, 0x735f76c?, 0xc0038efc50?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0026e6000, 0x1da}, {0xc0038efcc8?, 0xc00075e0e0?, 0xc0038efcf0?}) test/e2e/framework/log.go:63 +0x145 
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc00268af30}, {0xc000963e60?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage.glob..func32.6.1() test/e2e/storage/volume_provisioning.go:805 +0x5d6 [AfterEach] [sig-storage] Dynamic Provisioning test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "volume-provisioning-4441". �[38;5;243m01/21/23 13:17:05.378�[0m Jan 21 13:17:05.503: INFO: Unexpected error: failed to list events in namespace "volume-provisioning-4441": <*url.Error | 0xc0040290e0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/events", Err: <*net.OpError | 0xc000a48780>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0040290b0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000e8e7c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.503: FAIL: failed to list events in namespace "volume-provisioning-4441": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00281d590, {0xc002ec6c90, 0x18}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002d1bb00}, {0xc002ec6c90, 0x18}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013b0dc0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013b0dc0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "volume-provisioning-4441" for this suite. �[38;5;243m01/21/23 13:17:05.504�[0m Jan 21 13:17:05.643: FAIL: Couldn't delete ns: "volume-provisioning-4441": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4441", Err:(*net.OpError)(0xc000a48cd0)}) Full Stack Trace panic({0x6ea5bc0, 0xc001b0edc0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0004c3ce0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0007f8360, 0x110}, {0xc00281d048?, 0x735f76c?, 0xc00281d068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00402f900, 0xfb}, {0xc00281d0e0?, 0xc0005c3ec0?, 0xc00281d108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc0040290e0}, {0xc000e8e880?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00281d590, {0xc002ec6c90, 0x18}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002d1bb00}, {0xc002ec6c90, 0x18}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013b0dc0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013b0dc0) test/e2e/framework/framework.go:435 +0x21d
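For reference, the go run hack/e2e.go repro command at the top of this failure selects the spec via --ginkgo.focus, which is just the full test name with spaces escaped as \s, regex metacharacters backslash-escaped, and the end anchored with $ so that exactly one spec matches. A quick check in Go (the test name is abbreviated here purely for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Abbreviated form of the focus pattern from the repro command above:
	// spaces become \s, "[", "]" and "-" are escaped, "$" anchors the end.
	focus := `\[sig\-storage\]\sDynamic\sProvisioning\sInvalid\sAWS\sKMS\skey\sshould\sreport\san\serror\sand\screate\sno\sPV$`
	name := "[sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV"
	fmt.Println(regexp.MustCompile(focus).MatchString(name)) // true
}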
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sEphemeralstorage\sWhen\spod\srefers\sto\snon\-existent\sephemeral\sstorage\sshould\sallow\sdeletion\sof\spod\swith\sinvalid\svolume\s\:\sprojected$'
test/e2e/storage/ephemeral_volume.go:66
k8s.io/kubernetes/test/e2e/storage.glob..func7.2.1()
	test/e2e/storage/ephemeral_volume.go:66 +0x1bc
from junit_01.xml
{"msg":"FAILED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","completed":5,"skipped":43,"failed":1,"failures":["[sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected"]} [BeforeEach] [sig-storage] Ephemeralstorage test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:13.134�[0m Jan 21 13:16:13.134: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename pv �[38;5;243m01/21/23 13:16:13.135�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:13.548�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:13.767�[0m [BeforeEach] [sig-storage] Ephemeralstorage test/e2e/storage/ephemeral_volume.go:51 [It] should allow deletion of pod with invalid volume : projected test/e2e/storage/ephemeral_volume.go:58 Jan 21 13:16:44.102: INFO: Deleting pod "pv-5934"/"pod-ephm-test-projected-n97g" Jan 21 13:16:44.102: INFO: Deleting pod "pod-ephm-test-projected-n97g" in namespace "pv-5934" Jan 21 13:16:44.218: INFO: Wait up to 5m0s for pod "pod-ephm-test-projected-n97g" to be fully deleted Jan 21 13:17:06.454: INFO: Encountered non-retryable error while getting pod pv-5934/pod-ephm-test-projected-n97g: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934/pods/pod-ephm-test-projected-n97g": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:06.454: INFO: Unexpected error: <*errors.errorString | 0xc000e0a930>: { s: "pod \"pod-ephm-test-projected-n97g\" was not deleted: error while waiting for pod pv-5934/pod-ephm-test-projected-n97g not found: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934/pods/pod-ephm-test-projected-n97g\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.454: FAIL: pod "pod-ephm-test-projected-n97g" was not deleted: error while waiting for pod pv-5934/pod-ephm-test-projected-n97g not found: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934/pods/pod-ephm-test-projected-n97g": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func7.2.1() test/e2e/storage/ephemeral_volume.go:66 +0x1bc [AfterEach] [sig-storage] Ephemeralstorage test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "pv-5934". 
�[38;5;243m01/21/23 13:17:06.455�[0m Jan 21 13:17:06.577: INFO: Unexpected error: failed to list events in namespace "pv-5934": <*url.Error | 0xc0033ff980>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934/events", Err: <*net.OpError | 0xc003168e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0033ff950>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0034f2700>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.577: FAIL: failed to list events in namespace "pv-5934": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003635590, {0xc0035ff609, 0x7}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0035ea780}, {0xc0035ff609, 0x7}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000db3600, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000db3600) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "pv-5934" for this suite. �[38;5;243m01/21/23 13:17:06.577�[0m Jan 21 13:17:06.700: FAIL: Couldn't delete ns: "pv-5934": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/pv-5934", Err:(*net.OpError)(0xc003169270)}) Full Stack Trace panic({0x6ea5bc0, 0xc00372d780}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000b67b90}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00353a2d0, 0xee}, {0xc003635048?, 0x735f76c?, 0xc003635068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00008a460, 0xd9}, {0xc0036350e0?, 0xc001fc62c0?, 0xc003635108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc0033ff980}, {0xc0034f2740?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003635590, {0xc0035ff609, 0x7}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0035ea780}, {0xc0035ff609, 0x7}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000db3600, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000db3600) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sfile\sas\ssubpath\s\[LinuxOnly\]$'
test/e2e/storage/utils/local.go:171
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc0034282a0, 0xc003439640)
	test/e2e/storage/utils/local.go:171 +0x111
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0x8?, 0xc0033d9740?)
	test/e2e/storage/utils/local.go:351 +0x69
k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x0?)
	test/e2e/storage/drivers/in_tree.go:1953 +0x28
k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x7ca63f8?)
	test/e2e/storage/utils/utils.go:714 +0x6d
k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc003424300)
	test/e2e/storage/framework/volume_resource.go:231 +0xc89
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2()
	test/e2e/storage/testsuites/subpath.go:178 +0x145
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func6()
	test/e2e/storage/testsuites/subpath.go:239 +0x1eb
from junit_01.xml
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":7,"skipped":36,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:10.28�[0m Jan 21 13:16:10.280: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/21/23 13:16:10.281�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:10.619�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:10.842�[0m [It] should support file as subpath [LinuxOnly] test/e2e/storage/testsuites/subpath.go:231 Jan 21 13:16:11.180: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics �[1mSTEP:�[0m Creating block device on node "i-02ad0c8b16d8d1c65" using path "/tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b" �[38;5;243m01/21/23 13:16:11.18�[0m Jan 21 13:16:11.297: INFO: Waiting up to 5m0s for pod "hostexec-i-02ad0c8b16d8d1c65-w2g4j" in namespace "provisioning-3376" to be "running" Jan 21 13:16:11.410: INFO: Pod "hostexec-i-02ad0c8b16d8d1c65-w2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 112.362971ms Jan 21 13:16:13.523: INFO: Pod "hostexec-i-02ad0c8b16d8d1c65-w2g4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226022763s Jan 21 13:16:15.523: INFO: Pod "hostexec-i-02ad0c8b16d8d1c65-w2g4j": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.225222267s Jan 21 13:16:15.523: INFO: Pod "hostexec-i-02ad0c8b16d8d1c65-w2g4j" satisfied condition "running" Jan 21 13:16:15.523: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b && dd if=/dev/zero of=/tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b/file bs=4096 count=5120 && losetup -f /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b/file] Namespace:provisioning-3376 PodName:hostexec-i-02ad0c8b16d8d1c65-w2g4j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:15.523: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:15.524: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:15.524: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:16:16.331: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:provisioning-3376 PodName:hostexec-i-02ad0c8b16d8d1c65-w2g4j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:16.331: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:16.332: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:16.332: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:16:17.120: INFO: Creating resource for pre-provisioned PV Jan 21 13:16:17.120: INFO: Creating PVC and PV �[1mSTEP:�[0m Creating a PVC followed by a PV �[38;5;243m01/21/23 13:16:17.12�[0m Jan 21 13:16:17.349: INFO: Waiting for PV local-ss25m to bind to PVC pvc-s5bkv Jan 21 13:16:17.349: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-s5bkv] to have phase Bound Jan 21 13:16:17.461: INFO: PersistentVolumeClaim pvc-s5bkv found but phase is Pending instead of Bound. Jan 21 13:16:19.576: INFO: PersistentVolumeClaim pvc-s5bkv found but phase is Pending instead of Bound. Jan 21 13:16:21.693: INFO: PersistentVolumeClaim pvc-s5bkv found but phase is Pending instead of Bound. Jan 21 13:16:23.807: INFO: PersistentVolumeClaim pvc-s5bkv found but phase is Pending instead of Bound. 
Jan 21 13:16:25.935: INFO: PersistentVolumeClaim pvc-s5bkv found and phase=Bound (8.586490576s) Jan 21 13:16:25.935: INFO: Waiting up to 3m0s for PersistentVolume local-ss25m to have phase Bound Jan 21 13:16:26.047: INFO: PersistentVolume local-ss25m found and phase=Bound (112.125672ms) �[1mSTEP:�[0m Creating pod pod-subpath-test-preprovisionedpv-cmr4 �[38;5;243m01/21/23 13:16:26.272�[0m �[1mSTEP:�[0m Creating a pod to test atomic-volume-subpath �[38;5;243m01/21/23 13:16:26.272�[0m Jan 21 13:16:26.388: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cmr4" in namespace "provisioning-3376" to be "Succeeded or Failed" Jan 21 13:16:26.502: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 114.030374ms Jan 21 13:16:28.617: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229227152s Jan 21 13:16:30.616: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 4.22806939s Jan 21 13:16:32.615: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 6.227676605s Jan 21 13:16:34.657: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 8.268943145s Jan 21 13:16:36.623: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 10.235200713s Jan 21 13:16:38.645: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 12.257389651s Jan 21 13:16:40.616: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 14.227906235s Jan 21 13:16:42.615: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 16.226986952s Jan 21 13:16:44.615: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 18.22720213s Jan 21 13:16:46.615: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 20.22726393s Jan 21 13:16:48.614: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 22.226481258s Jan 21 13:16:50.617: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 24.229289563s Jan 21 13:16:52.614: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 26.226493975s Jan 21 13:16:54.614: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 28.226856378s Jan 21 13:16:56.616: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 30.227965496s Jan 21 13:16:58.615: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 32.227098875s Jan 21 13:17:00.618: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Running", Reason="", readiness=true. Elapsed: 34.230873075s Jan 21 13:17:02.618: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 36.230892807s �[1mSTEP:�[0m Saw pod success �[38;5;243m01/21/23 13:17:02.619�[0m Jan 21 13:17:02.619: INFO: Pod "pod-subpath-test-preprovisionedpv-cmr4" satisfied condition "Succeeded or Failed" Jan 21 13:17:02.731: INFO: Trying to get logs from node i-02ad0c8b16d8d1c65 pod pod-subpath-test-preprovisionedpv-cmr4 container test-container-subpath-preprovisionedpv-cmr4: <nil> �[1mSTEP:�[0m delete the pod �[38;5;243m01/21/23 13:17:02.845�[0m Jan 21 13:17:02.970: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cmr4 to disappear Jan 21 13:17:03.085: INFO: Pod pod-subpath-test-preprovisionedpv-cmr4 no longer exists �[1mSTEP:�[0m Deleting pod pod-subpath-test-preprovisionedpv-cmr4 �[38;5;243m01/21/23 13:17:03.085�[0m Jan 21 13:17:03.085: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cmr4" in namespace "provisioning-3376" �[1mSTEP:�[0m Deleting pod �[38;5;243m01/21/23 13:17:03.196�[0m Jan 21 13:17:03.196: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cmr4" in namespace "provisioning-3376" �[1mSTEP:�[0m Deleting pv and pvc �[38;5;243m01/21/23 13:17:03.311�[0m Jan 21 13:17:03.312: INFO: Deleting PersistentVolumeClaim "pvc-s5bkv" Jan 21 13:17:03.433: INFO: Deleting PersistentVolume "local-ss25m" Jan 21 13:17:03.547: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:provisioning-3376 PodName:hostexec-i-02ad0c8b16d8d1c65-w2g4j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:03.547: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:03.548: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:03.548: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Tear down block device "/dev/loop0" on node "i-02ad0c8b16d8d1c65" at path /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b/file �[38;5;243m01/21/23 13:17:04.365�[0m Jan 21 13:17:04.365: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:provisioning-3376 PodName:hostexec-i-02ad0c8b16d8d1c65-w2g4j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:04.365: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:04.366: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:04.366: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=losetup+-d+%2Fdev%2Floop0&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Removing the test directory 
/tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b �[38;5;243m01/21/23 13:17:05.123�[0m Jan 21 13:17:05.123: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b] Namespace:provisioning-3376 PodName:hostexec-i-02ad0c8b16d8d1c65-w2g4j ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:05.123: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:05.124: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:05.124: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:17:05.249: INFO: exec i-02ad0c8b16d8d1c65: command: rm -r /tmp/local-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b Jan 21 13:17:05.249: INFO: exec i-02ad0c8b16d8d1c65: stdout: "" Jan 21 13:17:05.249: INFO: exec i-02ad0c8b16d8d1c65: stderr: "" Jan 21 13:17:05.249: INFO: exec i-02ad0c8b16d8d1c65: exit code: 0 Jan 21 13:17:05.249: INFO: Unexpected error: <*errors.errorString | 0xc000ed0df0>: { s: "error sending request: Post \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:05.249: FAIL: error sending request: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=rm+-r+%2Ftmp%2Flocal-driver-0ca6a0c4-662b-4eea-a92e-64f8ef5cd58b&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc0034282a0, 0xc003439640) test/e2e/storage/utils/local.go:171 +0x111 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0x8?, 0xc0033d9740?) test/e2e/storage/utils/local.go:351 +0x69 k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x0?) test/e2e/storage/drivers/in_tree.go:1953 +0x28 k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x7ca63f8?) 
test/e2e/storage/utils/utils.go:714 +0x6d k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc003424300) test/e2e/storage/framework/volume_resource.go:231 +0xc89 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2() test/e2e/storage/testsuites/subpath.go:178 +0x145 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func6() test/e2e/storage/testsuites/subpath.go:239 +0x1eb �[1mSTEP:�[0m Deleting pod hostexec-i-02ad0c8b16d8d1c65-w2g4j in namespace provisioning-3376 �[38;5;243m01/21/23 13:17:05.25�[0m Jan 21 13:17:05.375: INFO: Unexpected error occurred: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:05.375: FAIL: failed to delete pod hostexec-i-02ad0c8b16d8d1c65-w2g4j in namespace provisioning-3376 Unexpected error: <*url.Error | 0xc0035b1050>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j", Err: <*net.OpError | 0xc0031f2af0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037dec00>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0030e99a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/pods/hostexec-i-02ad0c8b16d8d1c65-w2g4j": dial tcp 52.28.228.130:443: connect: connection refused occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.DeletePodOrFail({0x7ca63f8, 0xc002108d80}, {0xc00342e390, 0x11}, {0xc00342b860, 0x22}) test/e2e/framework/pod/delete.go:47 +0x270 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).Cleanup(0xc000607890) test/e2e/storage/utils/host_exec.go:187 +0x97 k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).PrepareTest.func1() test/e2e/storage/drivers/in_tree.go:1932 +0x2c k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x697b9c0?) test/e2e/storage/utils/utils.go:714 +0x6d k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2() test/e2e/storage/testsuites/subpath.go:182 +0x237 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func6() test/e2e/storage/testsuites/subpath.go:239 +0x1eb [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "provisioning-3376". 
�[38;5;243m01/21/23 13:17:05.376�[0m Jan 21 13:17:05.500: INFO: Unexpected error: failed to list events in namespace "provisioning-3376": <*url.Error | 0xc0037dee40>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/events", Err: <*net.OpError | 0xc0037dc550>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0035b19e0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0034a71e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.501: FAIL: failed to list events in namespace "provisioning-3376": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00150b590, {0xc000129a28, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002108d80}, {0xc000129a28, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013bbce0, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013bbce0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "provisioning-3376" for this suite. �[38;5;243m01/21/23 13:17:05.501�[0m Jan 21 13:17:05.628: FAIL: Couldn't delete ns: "provisioning-3376": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3376", Err:(*net.OpError)(0xc0038a3f40)}) Full Stack Trace panic({0x6ea5bc0, 0xc0037b55c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000a45420}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0037eb7a0, 0x102}, {0xc00150b048?, 0x735f76c?, 0xc00150b068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001706d20, 0xed}, {0xc00150b0e0?, 0xc003b82780?, 0xc00150b108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc0037dee40}, {0xc0034a7220?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00150b590, {0xc000129a28, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002108d80}, {0xc000129a28, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013bbce0, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013bbce0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sfile\sas\ssubpath\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:244
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0014254a0)
	test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":6,"skipped":35,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:05.659�[0m Jan 21 13:17:05.659: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/21/23 13:17:05.66�[0m Jan 21 13:17:05.783: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:07.909: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.906: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.908: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.907: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.908: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.906: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.905: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.911: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.910: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.908: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 
13:17:43.211: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.350: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.350: INFO: Unexpected error: <*errors.errorString | 0xc000207ba0>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.350: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0014254a0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:187 Jan 21 13:17:43.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.472: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
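The namespace-creation loop above retries the POST roughly every two seconds and then gives up with the generic "timed out waiting for the condition" message; that text is the timeout sentinel from the apimachinery wait package. Below is a minimal sketch of that retry pattern, assuming k8s.io/apimachinery is on the module path; it is an illustration of the polling behavior, not the framework's actual BeforeEach code:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition until it reports done or the timeout expires. If every
	// attempt fails (as with the connection-refused POSTs above), PollImmediate
	// returns wait.ErrWaitTimeout, whose text is exactly
	// "timed out waiting for the condition".
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		// Stand-in for "create the test namespace"; always failing here to
		// mimic the unreachable apiserver.
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}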
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sdirectory\sspecified\sin\sthe\svolumeMount$'
test/e2e/storage/utils/host_exec.go:110
k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0x7622fd8?, {0xc003045518, 0x13})
	test/e2e/storage/utils/host_exec.go:110 +0x445
k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc001435bd0, {0xc003432b00, 0x151}, 0xc003b56580)
	test/e2e/storage/utils/host_exec.go:136 +0x110
k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0x5?, {0xc003432b00?, 0xc003432b00?}, 0xc00097d6c0?)
	test/e2e/storage/utils/host_exec.go:169 +0x33
k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0x745d74a?, {0xc003432b00?, 0xc0014b9858?}, 0x5?)
	test/e2e/storage/utils/host_exec.go:178 +0x1e
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLinkBindMounted(0xc003407860, 0xc003b56580, 0xb?)
	test/e2e/storage/utils/local.go:258 +0x182
k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0x6507f80?, 0xc003b56580, {0x73a48c0, 0x14}, 0x0)
	test/e2e/storage/utils/local.go:318 +0x1b4
k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).CreateVolume(0xc001306900, 0xc003165500, {0x738a3ac, 0x10})
	test/e2e/storage/drivers/in_tree.go:1944 +0xd8
k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume({0x7c52078, 0xc001306900}, 0xc001658000?, {0x738a3ac, 0x10})
	test/e2e/storage/framework/driver_operations.go:43 +0xd2
k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource({0x7c52078, 0xc001306900}, 0xc003165500, {{0x73f0963, 0x1f}, {0x0, 0x0}, {0x738a3ac, 0x10}, {0x0, ...}, ...}, ...)
	test/e2e/storage/framework/volume_resource.go:65 +0x225
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1()
	test/e2e/storage/testsuites/subpath.go:128 +0x28e
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func16()
	test/e2e/storage/testsuites/subpath.go:367 +0x4d
from junit_01.xml
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","completed":7,"skipped":56,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:01.78�[0m Jan 21 13:17:01.780: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/21/23 13:17:01.781�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:17:02.122�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:17:02.352�[0m [It] should support readOnly directory specified in the volumeMount test/e2e/storage/testsuites/subpath.go:366 Jan 21 13:17:02.691: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics Jan 21 13:17:02.810: INFO: Waiting up to 5m0s for pod "hostexec-i-0d8577dd20eb0d9bc-7fpg4" in namespace "provisioning-3857" to be "running" Jan 21 13:17:02.923: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-7fpg4": Phase="Pending", Reason="", readiness=false. Elapsed: 113.430143ms Jan 21 13:17:05.125: INFO: Encountered non-retryable error while getting pod provisioning-3857/hostexec-i-0d8577dd20eb0d9bc-7fpg4: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857/pods/hostexec-i-0d8577dd20eb0d9bc-7fpg4": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:05.125: INFO: Unexpected error: <*fmt.wrapError | 0xc003390f40>: { msg: "error while waiting for pod provisioning-3857/hostexec-i-0d8577dd20eb0d9bc-7fpg4 to be running: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857/pods/hostexec-i-0d8577dd20eb0d9bc-7fpg4\": dial tcp 52.28.228.130:443: connect: connection refused", err: <*url.Error | 0xc0033ecde0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857/pods/hostexec-i-0d8577dd20eb0d9bc-7fpg4", Err: <*net.OpError | 0xc002fc34a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003b66030>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003390f00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, } Jan 21 13:17:05.125: FAIL: error while waiting for pod provisioning-3857/hostexec-i-0d8577dd20eb0d9bc-7fpg4 to be running: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857/pods/hostexec-i-0d8577dd20eb0d9bc-7fpg4": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0x7622fd8?, {0xc003045518, 0x13}) test/e2e/storage/utils/host_exec.go:110 +0x445 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc001435bd0, {0xc003432b00, 0x151}, 0xc003b56580) test/e2e/storage/utils/host_exec.go:136 +0x110 
k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0x5?, {0xc003432b00?, 0xc003432b00?}, 0xc00097d6c0?) test/e2e/storage/utils/host_exec.go:169 +0x33 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0x745d74a?, {0xc003432b00?, 0xc0014b9858?}, 0x5?) test/e2e/storage/utils/host_exec.go:178 +0x1e k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLinkBindMounted(0xc003407860, 0xc003b56580, 0xb?) test/e2e/storage/utils/local.go:258 +0x182 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0x6507f80?, 0xc003b56580, {0x73a48c0, 0x14}, 0x0) test/e2e/storage/utils/local.go:318 +0x1b4 k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).CreateVolume(0xc001306900, 0xc003165500, {0x738a3ac, 0x10}) test/e2e/storage/drivers/in_tree.go:1944 +0xd8 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume({0x7c52078, 0xc001306900}, 0xc001658000?, {0x738a3ac, 0x10}) test/e2e/storage/framework/driver_operations.go:43 +0xd2 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource({0x7c52078, 0xc001306900}, 0xc003165500, {{0x73f0963, 0x1f}, {0x0, 0x0}, {0x738a3ac, 0x10}, {0x0, ...}, ...}, ...) test/e2e/storage/framework/volume_resource.go:65 +0x225 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1() test/e2e/storage/testsuites/subpath.go:128 +0x28e k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func16() test/e2e/storage/testsuites/subpath.go:367 +0x4d [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "provisioning-3857". �[38;5;243m01/21/23 13:17:05.126�[0m Jan 21 13:17:05.251: INFO: Unexpected error: failed to list events in namespace "provisioning-3857": <*url.Error | 0xc003b66540>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857/events", Err: <*net.OpError | 0xc003422820>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0033ed710>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003133ea0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.251: FAIL: failed to list events in namespace "provisioning-3857": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001639590, {0xc0029b9140, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002993680}, {0xc0029b9140, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013cc840, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013cc840) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "provisioning-3857" for this suite. 
�[38;5;243m01/21/23 13:17:05.252�[0m Jan 21 13:17:05.375: FAIL: Couldn't delete ns: "provisioning-3857": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-3857", Err:(*net.OpError)(0xc002fc3a40)}) Full Stack Trace panic({0x6ea5bc0, 0xc003b47340}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc000775420}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002d6fb00, 0x102}, {0xc001639048?, 0x735f76c?, 0xc001639068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0030cc690, 0xed}, {0xc0016390e0?, 0xc003b60fc0?, 0xc001639108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc003b66540}, {0xc003133ee0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001639590, {0xc0029b9140, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc002993680}, {0xc0029b9140, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013cc840, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013cc840) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\stmpfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sfile\sspecified\sin\sthe\svolumeMount\s\[LinuxOnly\]$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012dc6e0) test/e2e/framework/framework.go:244 +0x7bf
(from junit_01.xml)
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","completed":5,"skipped":60,"failed":2,"failures":["[sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.372�[0m Jan 21 13:17:06.372: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/21/23 13:17:06.373�[0m Jan 21 13:17:06.496: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.618: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.624: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.619: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.618: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.621: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.626: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:41.933: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: 
connection refused Jan 21 13:17:42.076: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.076: INFO: Unexpected error: <*errors.errorString | 0xc0001eb920>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.076: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0012dc6e0) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:187 Jan 21 13:17:42.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.203: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sPersistentVolumes\-expansion\s\sloopback\slocal\sblock\svolume\sshould\ssupport\sonline\sexpansion\son\snode$'
test/e2e/storage/local_volume_resize.go:123 k8s.io/kubernetes/test/e2e/storage.glob..func19.1.3() test/e2e/storage/local_volume_resize.go:123 +0x829
(from junit_01.xml)
{"msg":"FAILED [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node","completed":13,"skipped":95,"failed":1,"failures":["[sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node"]} [BeforeEach] [sig-storage] PersistentVolumes-expansion test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:19.394�[0m Jan 21 13:16:19.394: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename persistent-local-volumes-expansion �[38;5;243m01/21/23 13:16:19.395�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:19.755�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:19.98�[0m [BeforeEach] loopback local block volume test/e2e/storage/local_volume_resize.go:54 �[1mSTEP:�[0m Initializing test volumes �[38;5;243m01/21/23 13:16:20.462�[0m �[1mSTEP:�[0m Creating block device on node "i-04e6f5db7ce157579" using path "/tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2" �[38;5;243m01/21/23 13:16:20.462�[0m Jan 21 13:16:20.591: INFO: Waiting up to 5m0s for pod "hostexec-i-04e6f5db7ce157579-v6m4x" in namespace "persistent-local-volumes-expansion-4029" to be "running" Jan 21 13:16:20.704: INFO: Pod "hostexec-i-04e6f5db7ce157579-v6m4x": Phase="Pending", Reason="", readiness=false. Elapsed: 113.302818ms Jan 21 13:16:22.872: INFO: Pod "hostexec-i-04e6f5db7ce157579-v6m4x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281361199s Jan 21 13:16:24.818: INFO: Pod "hostexec-i-04e6f5db7ce157579-v6m4x": Phase="Running", Reason="", readiness=true. Elapsed: 4.227131608s Jan 21 13:16:24.818: INFO: Pod "hostexec-i-04e6f5db7ce157579-v6m4x" satisfied condition "running" Jan 21 13:16:24.818: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2 && dd if=/dev/zero of=/tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2/file] Namespace:persistent-local-volumes-expansion-4029 PodName:hostexec-i-04e6f5db7ce157579-v6m4x ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:24.818: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:24.819: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:24.819: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/pods/hostexec-i-04e6f5db7ce157579-v6m4x/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:16:25.751: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] 
Namespace:persistent-local-volumes-expansion-4029 PodName:hostexec-i-04e6f5db7ce157579-v6m4x ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:25.751: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:25.751: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:25.751: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/pods/hostexec-i-04e6f5db7ce157579-v6m4x/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:16:26.583: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop0 && mount -t ext4 /dev/loop0 /tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2 && chmod o+rwx /tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2] Namespace:persistent-local-volumes-expansion-4029 PodName:hostexec-i-04e6f5db7ce157579-v6m4x ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:26.583: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:26.584: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:26.584: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/pods/hostexec-i-04e6f5db7ce157579-v6m4x/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkfs+-t+ext4+%2Fdev%2Floop0+%26%26+mount+-t+ext4+%2Fdev%2Floop0+%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2+%26%26+chmod+o%2Brwx+%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Creating local PVCs and PVs �[38;5;243m01/21/23 13:16:27.545�[0m Jan 21 13:16:27.545: INFO: Creating a PV followed by a PVC Jan 21 13:16:27.781: INFO: Waiting for PV local-pvgzpbk to bind to PVC pvc-5xtdk Jan 21 13:16:27.781: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-5xtdk] to have phase Bound Jan 21 13:16:27.895: INFO: PersistentVolumeClaim pvc-5xtdk found and phase=Bound (113.919023ms) Jan 21 13:16:27.895: INFO: Waiting up to 3m0s for PersistentVolume local-pvgzpbk to have phase Bound Jan 21 13:16:28.008: INFO: PersistentVolume local-pvgzpbk found and phase=Bound (113.234389ms) [It] should support online expansion on node test/e2e/storage/local_volume_resize.go:85 �[1mSTEP:�[0m Creating pod1 �[38;5;243m01/21/23 13:16:28.236�[0m �[1mSTEP:�[0m Creating a pod �[38;5;243m01/21/23 13:16:28.236�[0m Jan 21 13:16:28.354: INFO: Waiting up to 5m0s for pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34" in namespace "persistent-local-volumes-expansion-4029" to be "running" Jan 21 13:16:28.471: INFO: Pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34": Phase="Pending", Reason="", readiness=false. Elapsed: 117.225473ms Jan 21 13:16:30.586: INFO: Pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.232370136s Jan 21 13:16:32.595: INFO: Pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240992497s Jan 21 13:16:34.639: INFO: Pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34": Phase="Running", Reason="", readiness=true. Elapsed: 6.285128804s Jan 21 13:16:34.639: INFO: Pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34" satisfied condition "running" Jan 21 13:16:34.871: INFO: pod "pod-8cfbe432-7bb1-418f-88ad-bbe1085c4c34" created on Node "i-04e6f5db7ce157579" �[1mSTEP:�[0m Expanding current pvc �[38;5;243m01/21/23 13:16:34.871�[0m Jan 21 13:16:34.871: INFO: currentPvcSize 2Gi, newSize 2058Mi Jan 21 13:16:35.128: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-expansion-4029 PodName:hostexec-i-04e6f5db7ce157579-v6m4x ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:35.128: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:35.128: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:35.128: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/pods/hostexec-i-04e6f5db7ce157579-v6m4x/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:16:36.096: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c dd if=/dev/zero of=/tmp/local-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2/file conv=notrunc oflag=append bs=1M count=10 && losetup -c /dev/loop0] Namespace:persistent-local-volumes-expansion-4029 PodName:hostexec-i-04e6f5db7ce157579-v6m4x ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:36.096: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:36.097: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:36.097: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/pods/hostexec-i-04e6f5db7ce157579-v6m4x/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-volume-test-d664a1ac-0d7e-48a2-8f34-edf1bc9023a2%2Ffile+conv%3Dnotrunc+oflag%3Dappend+bs%3D1M+count%3D10+%26%26+losetup+-c+%2Fdev%2Floop0&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Waiting for file system resize to finish �[38;5;243m01/21/23 13:16:37.387�[0m Jan 21 13:17:05.636: INFO: Unexpected error: while waiting for fs resize to finish: <*errors.errorString | 0xc000ad4930>: { s: "error waiting for pvc \"pvc-5xtdk\" filesystem resize to finish: error fetching pvc \"pvc-5xtdk\" for checking for resize status : Get 
\"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/persistentvolumeclaims/pvc-5xtdk\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:05.636: FAIL: while waiting for fs resize to finish: error waiting for pvc "pvc-5xtdk" filesystem resize to finish: error fetching pvc "pvc-5xtdk" for checking for resize status : Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/persistentvolumeclaims/pvc-5xtdk": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func19.1.3() test/e2e/storage/local_volume_resize.go:123 +0x829 [AfterEach] loopback local block volume test/e2e/storage/local_volume_resize.go:80 �[1mSTEP:�[0m Cleaning up PVC and PV �[38;5;243m01/21/23 13:17:05.637�[0m Jan 21 13:17:05.637: INFO: pvc is nil Jan 21 13:17:05.637: INFO: Deleting PersistentVolume "local-pvgzpbk" Jan 21 13:17:05.760: FAIL: Failed to delete PV and/or PVC: failed to delete PV "local-pvgzpbk": PV Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-pvgzpbk": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc002f79b00, {0xc000b9bf78?, 0x1, 0x0?}) test/e2e/storage/persistent_volumes-local.go:860 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func19.1.2() test/e2e/storage/local_volume_resize.go:81 +0x47 [AfterEach] [sig-storage] PersistentVolumes-expansion test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "persistent-local-volumes-expansion-4029". �[38;5;243m01/21/23 13:17:05.76�[0m Jan 21 13:17:05.902: INFO: Unexpected error: failed to list events in namespace "persistent-local-volumes-expansion-4029": <*url.Error | 0xc001d05e30>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/events", Err: <*net.OpError | 0xc002898460>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00204e450>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00288bf20>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.902: FAIL: failed to list events in namespace "persistent-local-volumes-expansion-4029": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003cd7590, {0xc003946a80, 0x27}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc003f75080}, {0xc003946a80, 0x27}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001524000, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001524000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "persistent-local-volumes-expansion-4029" for this suite. 
�[38;5;243m01/21/23 13:17:05.903�[0m Jan 21 13:17:06.028: FAIL: Couldn't delete ns: "persistent-local-volumes-expansion-4029": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-expansion-4029", Err:(*net.OpError)(0xc00436bbd0)}) Full Stack Trace panic({0x6ea5bc0, 0xc003d5ac00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0003749a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002acfb80, 0x12e}, {0xc003cd7048?, 0x735f76c?, 0xc003cd7068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001bd4480, 0x119}, {0xc003cd70e0?, 0xc001df3110?, 0xc003cd7108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc001d05e30}, {0xc00288bf60?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003cd7590, {0xc003946a80, 0x27}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc003f75080}, {0xc003946a80, 0x27}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001524000, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001524000) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sblock\]\sTwo\spods\smounting\sa\slocal\svolume\sat\sthe\ssame\stime\sshould\sbe\sable\sto\swrite\sfrom\spod1\sand\sread\sfrom\spod2$'
test/e2e/framework/framework.go:244 k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013d8000) test/e2e/framework/framework.go:244 +0x7bf
(from junit_01.xml)
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":9,"skipped":73,"failed":2,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.85�[0m Jan 21 13:17:06.851: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename persistent-local-volumes-test �[38;5;243m01/21/23 13:17:06.852�[0m Jan 21 13:17:06.975: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.101: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.099: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.100: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.102: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.100: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.100: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.099: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.099: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.099: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.448: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.571: INFO: Unexpected error while creating namespace: Post 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.571: INFO: Unexpected error: <*errors.errorString | 0xc0002378f0>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.571: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013d8000) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:187 Jan 21 13:17:42.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.709: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sblockfswithformat\]\sOne\spod\srequesting\sone\sprebound\sPVC\sshould\sbe\sable\sto\smount\svolume\sand\sread\sfrom\spod1$'
test/e2e/storage/persistent_volumes-local.go:220 k8s.io/kubernetes/test/e2e/storage.glob..func24.2.3.1() test/e2e/storage/persistent_volumes-local.go:220 +0xd1
(from junit_01.xml)
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":9,"skipped":58,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:44.903�[0m Jan 21 13:16:44.903: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename persistent-local-volumes-test �[38;5;243m01/21/23 13:16:44.904�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:45.25�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:45.472�[0m [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:160 [BeforeEach] [Volume type: blockfswithformat] test/e2e/storage/persistent_volumes-local.go:197 �[1mSTEP:�[0m Initializing test volumes �[38;5;243m01/21/23 13:16:45.947�[0m �[1mSTEP:�[0m Creating block device on node "i-0d8577dd20eb0d9bc" using path "/tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6" �[38;5;243m01/21/23 13:16:45.947�[0m Jan 21 13:16:46.074: INFO: Waiting up to 5m0s for pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2" in namespace "persistent-local-volumes-test-5202" to be "running" Jan 21 13:16:46.201: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 127.801154ms Jan 21 13:16:48.315: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241145591s Jan 21 13:16:50.313: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239377344s Jan 21 13:16:52.313: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239636423s Jan 21 13:16:54.315: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241804004s Jan 21 13:16:56.314: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.239946734s Jan 21 13:16:58.313: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.239408975s Jan 21 13:17:00.314: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.240002814s Jan 21 13:17:00.314: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-l2nf2" satisfied condition "running" Jan 21 13:17:00.314: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6 && dd if=/dev/zero of=/tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6/file] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-i-0d8577dd20eb0d9bc-l2nf2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:00.314: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:00.314: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:00.314: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/hostexec-i-0d8577dd20eb0d9bc-l2nf2/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:17:01.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-i-0d8577dd20eb0d9bc-l2nf2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:01.265: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:01.266: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:01.266: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/hostexec-i-0d8577dd20eb0d9bc-l2nf2/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:17:02.113: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6 && chmod o+rwx /tmp/local-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6] Namespace:persistent-local-volumes-test-5202 PodName:hostexec-i-0d8577dd20eb0d9bc-l2nf2 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:02.113: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:02.114: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:02.114: INFO: ExecWithOptions: execute(POST 
https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/hostexec-i-0d8577dd20eb0d9bc-l2nf2/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkfs+-t+ext4+%2Fdev%2Floop1+%26%26+mount+-t+ext4+%2Fdev%2Floop1+%2Ftmp%2Flocal-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6+%26%26+chmod+o%2Brwx+%2Ftmp%2Flocal-volume-test-f3ad64c9-e392-4011-aa45-4c606bf68bf6&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Creating local PVCs and PVs �[38;5;243m01/21/23 13:17:03.184�[0m Jan 21 13:17:03.184: INFO: Creating a PV followed by a PVC Jan 21 13:17:03.410: INFO: Waiting for PV local-pvwssxz to bind to PVC pvc-pnwq9 Jan 21 13:17:03.410: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-pnwq9] to have phase Bound Jan 21 13:17:03.524: INFO: PersistentVolumeClaim pvc-pnwq9 found and phase=Bound (113.573174ms) Jan 21 13:17:03.524: INFO: Waiting up to 3m0s for PersistentVolume local-pvwssxz to have phase Bound Jan 21 13:17:03.637: INFO: PersistentVolume local-pvwssxz found and phase=Bound (112.717374ms) [BeforeEach] One pod requesting one prebound PVC test/e2e/storage/persistent_volumes-local.go:217 �[1mSTEP:�[0m Creating pod1 �[38;5;243m01/21/23 13:17:03.859�[0m �[1mSTEP:�[0m Creating a pod �[38;5;243m01/21/23 13:17:03.859�[0m Jan 21 13:17:03.973: INFO: Waiting up to 5m0s for pod "pod-97d6375d-3d71-493f-9267-3a1fc7a8d031" in namespace "persistent-local-volumes-test-5202" to be "running" Jan 21 13:17:04.085: INFO: Pod "pod-97d6375d-3d71-493f-9267-3a1fc7a8d031": Phase="Pending", Reason="", readiness=false. Elapsed: 111.728501ms Jan 21 13:17:06.210: INFO: Encountered non-retryable error while getting pod persistent-local-volumes-test-5202/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:06.210: INFO: Unexpected error: <*errors.errorString | 0xc0010bc020>: { s: "pod \"pod-97d6375d-3d71-493f-9267-3a1fc7a8d031\" is not Running: error while waiting for pod persistent-local-volumes-test-5202/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031 to be running: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.210: FAIL: pod "pod-97d6375d-3d71-493f-9267-3a1fc7a8d031" is not Running: error while waiting for pod persistent-local-volumes-test-5202/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031 to be running: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func24.2.3.1() test/e2e/storage/persistent_volumes-local.go:220 +0xd1 [AfterEach] One pod requesting one prebound PVC test/e2e/storage/persistent_volumes-local.go:229 �[1mSTEP:�[0m Deleting pod1 �[38;5;243m01/21/23 13:17:06.21�[0m �[1mSTEP:�[0m Deleting pod pod-97d6375d-3d71-493f-9267-3a1fc7a8d031 in namespace persistent-local-volumes-test-5202 �[38;5;243m01/21/23 13:17:06.21�[0m Jan 21 13:17:06.337: 
INFO: Unexpected error occurred: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:06.337: FAIL: failed to delete pod pod-97d6375d-3d71-493f-9267-3a1fc7a8d031 in namespace persistent-local-volumes-test-5202 Unexpected error: <*url.Error | 0xc001882a80>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031", Err: <*net.OpError | 0xc0022b9f90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002e0fb30>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002dcd420>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/pods/pod-97d6375d-3d71-493f-9267-3a1fc7a8d031": dial tcp 52.28.228.130:443: connect: connection refused occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.DeletePodOrFail({0x7ca63f8, 0xc001614600}, {0xc0063dd0b0, 0x22}, {0xc00237e9c0, 0x28}) test/e2e/framework/pod/delete.go:47 +0x270 k8s.io/kubernetes/test/e2e/storage.glob..func24.2.3.2() test/e2e/storage/persistent_volumes-local.go:231 +0x6a [AfterEach] [Volume type: blockfswithformat] test/e2e/storage/persistent_volumes-local.go:206 �[1mSTEP:�[0m Cleaning up PVC and PV �[38;5;243m01/21/23 13:17:06.338�[0m Jan 21 13:17:06.338: INFO: Deleting PersistentVolumeClaim "pvc-pnwq9" Jan 21 13:17:06.464: INFO: Deleting PersistentVolume "local-pvwssxz" Jan 21 13:17:06.589: FAIL: Failed to delete PV and/or PVC: [failed to delete PVC "pvc-pnwq9": PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/persistentvolumeclaims/pvc-pnwq9": dial tcp 52.28.228.130:443: connect: connection refused, failed to delete PV "local-pvwssxz": PV Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-pvwssxz": dial tcp 52.28.228.130:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc0019aea20, {0xc002e3bf78?, 0x1, 0xc0022b9a40?}) test/e2e/storage/persistent_volumes-local.go:860 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func24.2.2() test/e2e/storage/persistent_volumes-local.go:207 +0x47 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "persistent-local-volumes-test-5202". 
�[38;5;243m01/21/23 13:17:06.59�[0m Jan 21 13:17:06.711: INFO: Unexpected error: failed to list events in namespace "persistent-local-volumes-test-5202": <*url.Error | 0xc001a3b200>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/events", Err: <*net.OpError | 0xc001243b80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001883cb0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc002ca96c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.711: FAIL: failed to list events in namespace "persistent-local-volumes-test-5202": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001c07590, {0xc0063dd0b0, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc001614600}, {0xc0063dd0b0, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013d8000, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013d8000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "persistent-local-volumes-test-5202" for this suite. �[38;5;243m01/21/23 13:17:06.712�[0m Jan 21 13:17:06.833: FAIL: Couldn't delete ns: "persistent-local-volumes-test-5202": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5202", Err:(*net.OpError)(0xc001243e50)}) Full Stack Trace panic({0x6ea5bc0, 0xc002245e40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00069b260}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002e93180, 0x124}, {0xc001c07048?, 0x735f76c?, 0xc001c07068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001774c60, 0x10f}, {0xc001c070e0?, 0xc001bd2270?, 0xc001c07108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc001a3b200}, {0xc002ca9700?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001c07590, {0xc0063dd0b0, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc001614600}, {0xc0063dd0b0, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0013d8000, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0013d8000) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sblockfswithoutformat\]\sTwo\spods\smounting\sa\slocal\svolume\sat\sthe\ssame\stime\sshould\sbe\sable\sto\swrite\sfrom\spod1\sand\sread\sfrom\spod2$'
test/e2e/storage/persistent_volumes-local.go:738 k8s.io/kubernetes/test/e2e/storage.twoPodsReadWriteTest(0xc0006dd980?, 0xc003112c60, 0xc003438c90) test/e2e/storage/persistent_volumes-local.go:738 +0x73 k8s.io/kubernetes/test/e2e/storage.glob..func24.2.4.1() test/e2e/storage/persistent_volumes-local.go:252 +0x2b
(from junit_01.xml)
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","completed":9,"skipped":79,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2"]} [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:43.225�[0m Jan 21 13:16:43.225: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename persistent-local-volumes-test �[38;5;243m01/21/23 13:16:43.226�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:43.57�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:43.792�[0m [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:160 [BeforeEach] [Volume type: blockfswithoutformat] test/e2e/storage/persistent_volumes-local.go:197 �[1mSTEP:�[0m Initializing test volumes �[38;5;243m01/21/23 13:16:44.242�[0m �[1mSTEP:�[0m Creating block device on node "i-0d8577dd20eb0d9bc" using path "/tmp/local-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6" �[38;5;243m01/21/23 13:16:44.242�[0m Jan 21 13:16:44.359: INFO: Waiting up to 5m0s for pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d" in namespace "persistent-local-volumes-test-2901" to be "running" Jan 21 13:16:44.471: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 111.392077ms Jan 21 13:16:46.585: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225162304s Jan 21 13:16:48.583: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22370107s Jan 21 13:16:50.582: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222950082s Jan 21 13:16:52.589: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230001999s Jan 21 13:16:54.587: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.227471766s Jan 21 13:16:56.583: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223617843s Jan 21 13:16:58.583: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.223971965s Jan 21 13:16:58.584: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-ccc2d" satisfied condition "running" Jan 21 13:16:58.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6 && dd if=/dev/zero of=/tmp/local-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6/file] Namespace:persistent-local-volumes-test-2901 PodName:hostexec-i-0d8577dd20eb0d9bc-ccc2d ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:58.584: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:58.585: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:58.585: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/pods/hostexec-i-0d8577dd20eb0d9bc-ccc2d/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 21 13:16:59.529: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:persistent-local-volumes-test-2901 PodName:hostexec-i-0d8577dd20eb0d9bc-ccc2d ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:16:59.529: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:16:59.530: INFO: ExecWithOptions: Clientset creation Jan 21 13:16:59.530: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/pods/hostexec-i-0d8577dd20eb0d9bc-ccc2d/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-volume-test-1ec80281-5cb1-4e3e-970f-7eb4ae3ad1c6%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Creating local PVCs and PVs �[38;5;243m01/21/23 13:17:00.369�[0m Jan 21 13:17:00.369: INFO: Creating a PV followed by a PVC Jan 21 13:17:00.609: INFO: Waiting for PV local-pvw4q5c to bind to PVC pvc-t98gs Jan 21 13:17:00.609: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-t98gs] to have phase Bound Jan 21 13:17:00.721: INFO: PersistentVolumeClaim pvc-t98gs found and phase=Bound (111.481038ms) Jan 21 13:17:00.721: INFO: Waiting up to 3m0s for PersistentVolume local-pvw4q5c to have phase Bound Jan 21 13:17:00.834: INFO: PersistentVolume local-pvw4q5c found and phase=Bound (113.009267ms) [It] should be able to write from pod1 and read from pod2 test/e2e/storage/persistent_volumes-local.go:251 �[1mSTEP:�[0m Creating pod1 to write to the PV �[38;5;243m01/21/23 13:17:01.058�[0m 
�[1mSTEP:�[0m Creating a pod �[38;5;243m01/21/23 13:17:01.058�[0m Jan 21 13:17:01.173: INFO: Waiting up to 5m0s for pod "pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2" in namespace "persistent-local-volumes-test-2901" to be "running" Jan 21 13:17:01.284: INFO: Pod "pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2": Phase="Pending", Reason="", readiness=false. Elapsed: 111.554372ms Jan 21 13:17:03.396: INFO: Pod "pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223098538s Jan 21 13:17:05.408: INFO: Encountered non-retryable error while getting pod persistent-local-volumes-test-2901/pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/pods/pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:05.408: INFO: Unexpected error: <*errors.errorString | 0xc0005bbd50>: { s: "pod \"pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2\" is not Running: error while waiting for pod persistent-local-volumes-test-2901/pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2 to be running: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/pods/pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:05.408: FAIL: pod "pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2" is not Running: error while waiting for pod persistent-local-volumes-test-2901/pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2 to be running: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/pods/pod-1617b7ac-3c1b-4135-8d46-7d93adb7ffc2": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.twoPodsReadWriteTest(0xc0006dd980?, 0xc003112c60, 0xc003438c90) test/e2e/storage/persistent_volumes-local.go:738 +0x73 k8s.io/kubernetes/test/e2e/storage.glob..func24.2.4.1() test/e2e/storage/persistent_volumes-local.go:252 +0x2b [AfterEach] [Volume type: blockfswithoutformat] test/e2e/storage/persistent_volumes-local.go:206 �[1mSTEP:�[0m Cleaning up PVC and PV �[38;5;243m01/21/23 13:17:05.409�[0m Jan 21 13:17:05.409: INFO: Deleting PersistentVolumeClaim "pvc-t98gs" Jan 21 13:17:05.533: INFO: Deleting PersistentVolume "local-pvw4q5c" Jan 21 13:17:05.656: FAIL: Failed to delete PV and/or PVC: [failed to delete PVC "pvc-t98gs": PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/persistentvolumeclaims/pvc-t98gs": dial tcp 52.28.228.130:443: connect: connection refused, failed to delete PV "local-pvw4q5c": PV Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-pvw4q5c": dial tcp 52.28.228.130:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage.cleanupLocalVolumes(0xc003112c60, {0xc002987f78?, 0x1, 0x0?}) test/e2e/storage/persistent_volumes-local.go:860 +0x2d k8s.io/kubernetes/test/e2e/storage.glob..func24.2.2() test/e2e/storage/persistent_volumes-local.go:207 +0x47 [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "persistent-local-volumes-test-2901". 
�[38;5;243m01/21/23 13:17:05.656�[0m Jan 21 13:17:05.780: INFO: Unexpected error: failed to list events in namespace "persistent-local-volumes-test-2901": <*url.Error | 0xc0035a15f0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/events", Err: <*net.OpError | 0xc003144190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0032eb1d0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0035e6900>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:05.780: FAIL: failed to list events in namespace "persistent-local-volumes-test-2901": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002e6d590, {0xc00319c2d0, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc003051200}, {0xc00319c2d0, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00140d8c0, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00140d8c0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "persistent-local-volumes-test-2901" for this suite. �[38;5;243m01/21/23 13:17:05.78�[0m Jan 21 13:17:05.903: FAIL: Couldn't delete ns: "persistent-local-volumes-test-2901": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-2901", Err:(*net.OpError)(0xc002fd0a00)}) Full Stack Trace panic({0x6ea5bc0, 0xc002ff6580}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc0008dddc0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002b09180, 0x124}, {0xc002e6d048?, 0x735f76c?, 0xc002e6d068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc003049560, 0x10f}, {0xc002e6d0e0?, 0xc0035acf70?, 0xc002e6d108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc0035a15f0}, {0xc0035e6940?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002e6d590, {0xc00319c2d0, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc003051200}, {0xc00319c2d0, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00140d8c0, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00140d8c0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sdir\-bindmounted\]\sOne\spod\srequesting\sone\sprebound\sPVC\sshould\sbe\sable\sto\smount\svolume\sand\sread\sfrom\spod1$'
test/e2e/storage/persistent_volumes-local.go:868
k8s.io/kubernetes/test/e2e/storage.verifyLocalVolume(0x736afba?, 0x0?)
    test/e2e/storage/persistent_volumes-local.go:868 +0x51
k8s.io/kubernetes/test/e2e/storage.createLocalPVCsPVs(0xc0019e1a70, {0xc0042fa3e0, 0x1, 0x1}, {0x736ca9e, 0x9})
    test/e2e/storage/persistent_volumes-local.go:947 +0x4cc
k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0019e1a70?, {0x738696d, 0xf}, 0x0?, 0x0?, {0x736ca9e, 0x9})
    test/e2e/storage/persistent_volumes-local.go:1107 +0xce
k8s.io/kubernetes/test/e2e/storage.glob..func24.2.1()
    test/e2e/storage/persistent_volumes-local.go:202 +0xd7
from junit_01.xml
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":11,"skipped":87,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:50.067�[0m Jan 21 13:16:50.067: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename persistent-local-volumes-test �[38;5;243m01/21/23 13:16:50.068�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:50.415�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:50.646�[0m [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:160 [BeforeEach] [Volume type: dir-bindmounted] test/e2e/storage/persistent_volumes-local.go:197 �[1mSTEP:�[0m Initializing test volumes �[38;5;243m01/21/23 13:16:51.095�[0m Jan 21 13:16:51.212: INFO: Waiting up to 5m0s for pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4" in namespace "persistent-local-volumes-test-4734" to be "running" Jan 21 13:16:51.324: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Pending", Reason="", readiness=false. Elapsed: 112.113825ms Jan 21 13:16:53.446: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234461963s Jan 21 13:16:55.436: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224137522s Jan 21 13:16:57.436: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224334329s Jan 21 13:16:59.436: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224398894s Jan 21 13:17:01.436: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.22446357s Jan 21 13:17:03.436: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.224519989s Jan 21 13:17:03.436: INFO: Pod "hostexec-i-0d8577dd20eb0d9bc-hqpz4" satisfied condition "running" Jan 21 13:17:03.436: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-volume-test-24349ff8-f032-4804-b06c-44bb8af6693f && mount --bind /tmp/local-volume-test-24349ff8-f032-4804-b06c-44bb8af6693f /tmp/local-volume-test-24349ff8-f032-4804-b06c-44bb8af6693f] Namespace:persistent-local-volumes-test-4734 PodName:hostexec-i-0d8577dd20eb0d9bc-hqpz4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 21 13:17:03.436: INFO: >>> kubeConfig: /root/.kube/config Jan 21 13:17:03.437: INFO: ExecWithOptions: Clientset creation Jan 21 13:17:03.437: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734/pods/hostexec-i-0d8577dd20eb0d9bc-hqpz4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%2Ftmp%2Flocal-volume-test-24349ff8-f032-4804-b06c-44bb8af6693f+%26%26+mount+--bind+%2Ftmp%2Flocal-volume-test-24349ff8-f032-4804-b06c-44bb8af6693f+%2Ftmp%2Flocal-volume-test-24349ff8-f032-4804-b06c-44bb8af6693f&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Creating local PVCs and PVs �[38;5;243m01/21/23 13:17:04.298�[0m Jan 21 13:17:04.298: INFO: Creating a PV followed by a PVC Jan 21 13:17:04.525: INFO: Waiting for PV local-pv92q8z to bind to PVC pvc-r7mpc Jan 21 13:17:04.525: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-r7mpc] to have phase Bound Jan 21 13:17:04.638: INFO: PersistentVolumeClaim pvc-r7mpc found and phase=Bound (113.323041ms) Jan 21 13:17:04.638: INFO: Waiting up to 3m0s for PersistentVolume local-pv92q8z to have phase Bound Jan 21 13:17:04.750: INFO: PersistentVolume local-pv92q8z found and phase=Bound (111.723736ms) Jan 21 13:17:06.117: INFO: Unexpected error: <*errors.errorString | 0xc002c55510>: { s: "PVC Get API error: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734/persistentvolumeclaims/pvc-r7mpc\": dial tcp 52.28.228.130:443: connect: connection refused", } Jan 21 13:17:06.117: FAIL: PVC Get API error: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734/persistentvolumeclaims/pvc-r7mpc": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage.verifyLocalVolume(0x736afba?, 0x0?) 
test/e2e/storage/persistent_volumes-local.go:868 +0x51 k8s.io/kubernetes/test/e2e/storage.createLocalPVCsPVs(0xc0019e1a70, {0xc0042fa3e0, 0x1, 0x1}, {0x736ca9e, 0x9}) test/e2e/storage/persistent_volumes-local.go:947 +0x4cc k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0019e1a70?, {0x738696d, 0xf}, 0x0?, 0x0?, {0x736ca9e, 0x9}) test/e2e/storage/persistent_volumes-local.go:1107 +0xce k8s.io/kubernetes/test/e2e/storage.glob..func24.2.1() test/e2e/storage/persistent_volumes-local.go:202 +0xd7 [AfterEach] [Volume type: dir-bindmounted] test/e2e/storage/persistent_volumes-local.go:206 �[1mSTEP:�[0m Cleaning up PVC and PV �[38;5;243m01/21/23 13:17:06.118�[0m [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "persistent-local-volumes-test-4734". �[38;5;243m01/21/23 13:17:06.118�[0m Jan 21 13:17:06.240: INFO: Unexpected error: failed to list events in namespace "persistent-local-volumes-test-4734": <*url.Error | 0xc00337bec0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734/events", Err: <*net.OpError | 0xc0036dd810>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0033cc9c0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003142ec0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.240: FAIL: failed to list events in namespace "persistent-local-volumes-test-4734": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00511b590, {0xc002bdd7a0, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0004d7380}, {0xc002bdd7a0, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0014ab080, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0014ab080) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "persistent-local-volumes-test-4734" for this suite. 
�[38;5;243m01/21/23 13:17:06.24�[0m Jan 21 13:17:06.365: FAIL: Couldn't delete ns: "persistent-local-volumes-test-4734": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-4734", Err:(*net.OpError)(0xc0029cfae0)}) Full Stack Trace panic({0x6ea5bc0, 0xc0016ed540}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00077d960}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003da8000, 0x124}, {0xc00511b048?, 0x735f76c?, 0xc00511b068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00224fd40, 0x10f}, {0xc00511b0e0?, 0xc00463bd40?, 0xc00511b108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc00337bec0}, {0xc003142f00?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00511b590, {0xc002bdd7a0, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0004d7380}, {0xc002bdd7a0, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0014ab080, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0014ab080) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sSecrets\sshould\sbe\sconsumable\sfrom\spods\sin\svolume\swith\smappings\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:244
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008de420)
    test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","completed":11,"skipped":88,"failed":2,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]} [BeforeEach] [sig-storage] Secrets test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:06.369�[0m Jan 21 13:17:06.369: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename secrets �[38;5;243m01/21/23 13:17:06.37�[0m Jan 21 13:17:06.496: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:08.618: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:10.623: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:12.621: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:14.621: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:16.618: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:18.621: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:20.622: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:22.622: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:24.620: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:41.932: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.063: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.063: INFO: Unexpected error: 
<*errors.errorString | 0xc000285b50>: { s: "timed out waiting for the condition", } Jan 21 13:17:42.063: FAIL: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0008de420) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-storage] Secrets test/e2e/framework/framework.go:187 Jan 21 13:17:42.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:42.215: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sSubpath\sAtomic\swriter\svolumes\sshould\ssupport\ssubpaths\swith\sconfigmap\spod\s\[Conformance\]$'
test/e2e/framework/framework.go:244
k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013d7e40)
    test/e2e/framework/framework.go:244 +0x7bf
from junit_01.xml
{"msg":"FAILED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","completed":8,"skipped":108,"failed":2,"failures":["[sig-network] Services should be able to up and down services","[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]"]} [BeforeEach] [sig-storage] Subpath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:17:07.316�[0m Jan 21 13:17:07.316: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename subpath �[38;5;243m01/21/23 13:17:07.317�[0m Jan 21 13:17:07.441: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:09.565: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:11.565: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:13.566: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:15.564: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:17.565: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:19.580: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:21.566: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:23.568: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:25.574: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:42.957: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.093: INFO: Unexpected error while creating namespace: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 52.28.228.130:443: connect: connection refused Jan 21 13:17:43.093: INFO: Unexpected error: <*errors.errorString | 0xc0001eb920>: { s: "timed out waiting for the condition", } Jan 21 13:17:43.093: FAIL: timed out 
waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).BeforeEach(0xc0013d7e40) test/e2e/framework/framework.go:244 +0x7bf [AfterEach] [sig-storage] Subpath test/e2e/framework/framework.go:187 Jan 21 13:17:43.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 21 13:17:43.220: FAIL: All nodes should be ready after test, Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sSubpath\sContainer\srestart\sshould\sverify\sthat\scontainer\scan\srestart\ssuccessfully\safter\sconfigmaps\smodified$'
test/e2e/storage/testsuites/subpath.go:878
k8s.io/kubernetes/test/e2e/storage/testsuites.testPodContainerRestartWithHooks(0xc000f662c0, 0xc0016b9400, 0xc000259dd0)
    test/e2e/storage/testsuites/subpath.go:878 +0x7fd
k8s.io/kubernetes/test/e2e/storage/testsuites.TestPodContainerRestartWithConfigmapModified(0xc000f662c0, 0xc00354db00, 0xc00354dd40)
    test/e2e/storage/testsuites/subpath.go:937 +0x4c5
k8s.io/kubernetes/test/e2e/storage.glob..func29.2.1()
    test/e2e/storage/subpath.go:126 +0x179
from junit_01.xml
{"msg":"FAILED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","completed":10,"skipped":61,"failed":1,"failures":["[sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified"]} [BeforeEach] [sig-storage] Subpath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/21/23 13:16:23.131�[0m Jan 21 13:16:23.131: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename subpath �[38;5;243m01/21/23 13:16:23.132�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/21/23 13:16:23.473�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/21/23 13:16:23.697�[0m [It] should verify that container can restart successfully after configmaps modified test/e2e/storage/subpath.go:123 �[1mSTEP:�[0m Create configmap �[38;5;243m01/21/23 13:16:23.92�[0m �[1mSTEP:�[0m Creating pod pod-subpath-test-configmap-pmlq �[38;5;243m01/21/23 13:16:24.05�[0m Jan 21 13:16:24.176: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pmlq" in namespace "subpath-7660" to be "running" Jan 21 13:16:24.290: INFO: Pod "pod-subpath-test-configmap-pmlq": Phase="Pending", Reason="", readiness=false. Elapsed: 113.812926ms Jan 21 13:16:26.411: INFO: Pod "pod-subpath-test-configmap-pmlq": Phase="Running", Reason="", readiness=true. Elapsed: 2.235066555s Jan 21 13:16:26.411: INFO: Pod "pod-subpath-test-configmap-pmlq" satisfied condition "running" �[1mSTEP:�[0m Failing liveness probe �[38;5;243m01/21/23 13:16:26.411�[0m Jan 21 13:16:26.411: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ec436c25-998b-11ed-a697-56ea552f9d82/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=subpath-7660 exec pod-subpath-test-configmap-pmlq --container test-container-volume-configmap-pmlq -- /bin/sh -c rm /probe-volume/probe-file' Jan 21 13:16:27.625: INFO: stderr: "" Jan 21 13:16:27.625: INFO: stdout: "" Jan 21 13:16:27.625: INFO: Pod exec output: �[1mSTEP:�[0m Waiting for container to restart �[38;5;243m01/21/23 13:16:27.625�[0m Jan 21 13:16:27.738: INFO: Container test-container-subpath-configmap-pmlq, restarts: 0 Jan 21 13:16:37.857: INFO: Container test-container-subpath-configmap-pmlq, restarts: 1 Jan 21 13:16:37.857: INFO: Container has restart count: 1 �[1mSTEP:�[0m Fix liveness probe �[38;5;243m01/21/23 13:16:37.857�[0m �[1mSTEP:�[0m Waiting for container to stop restarting �[38;5;243m01/21/23 13:16:37.982�[0m Jan 21 13:16:54.272: INFO: Container has restart count: 2 Jan 21 13:17:00.269: INFO: Container has restart count: 3 Jan 21 13:17:06.283: INFO: Unexpected error: while waiting for container to stabilize: <*url.Error | 0xc001a6e3c0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/subpath-7660/pods/pod-subpath-test-configmap-pmlq", Err: <*net.OpError | 0xc003b28000>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002c1d1a0>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003a51a00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.283: FAIL: while waiting for container to stabilize: Get 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/subpath-7660/pods/pod-subpath-test-configmap-pmlq": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.testPodContainerRestartWithHooks(0xc000f662c0, 0xc0016b9400, 0xc000259dd0) test/e2e/storage/testsuites/subpath.go:878 +0x7fd k8s.io/kubernetes/test/e2e/storage/testsuites.TestPodContainerRestartWithConfigmapModified(0xc000f662c0, 0xc00354db00, 0xc00354dd40) test/e2e/storage/testsuites/subpath.go:937 +0x4c5 k8s.io/kubernetes/test/e2e/storage.glob..func29.2.1() test/e2e/storage/subpath.go:126 +0x179 Jan 21 13:17:06.284: INFO: Deleting pod "pod-subpath-test-configmap-pmlq" in namespace "subpath-7660" [AfterEach] [sig-storage] Subpath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "subpath-7660". �[38;5;243m01/21/23 13:17:06.406�[0m Jan 21 13:17:06.531: INFO: Unexpected error: failed to list events in namespace "subpath-7660": <*url.Error | 0xc001bf9650>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/subpath-7660/events", Err: <*net.OpError | 0xc003a78820>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001a6fc80>{ IP: [52, 28, 228, 130], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0006dedc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 21 13:17:06.531: FAIL: failed to list events in namespace "subpath-7660": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/subpath-7660/events": dial tcp 52.28.228.130:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc000ded590, {0xc0031a2880, 0xc}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0004e3380}, {0xc0031a2880, 0xc}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000f662c0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f662c0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "subpath-7660" for this suite. 
�[38;5;243m01/21/23 13:17:06.531�[0m Jan 21 13:17:06.653: FAIL: Couldn't delete ns: "subpath-7660": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/subpath-7660": dial tcp 52.28.228.130:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k25-ko25.test-cncf-aws.k8s.io/api/v1/namespaces/subpath-7660", Err:(*net.OpError)(0xc003b287d0)}) Full Stack Trace panic({0x6ea5bc0, 0xc0040dfd00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea7de0, 0xc00015df10}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000011e00, 0xf8}, {0xc000ded048?, 0x735f76c?, 0xc000ded068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0002171d0, 0xe3}, {0xc000ded0e0?, 0xc003d568f0?, 0xc000ded108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c388e0, 0xc001bf9650}, {0xc0006df000?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc000ded590, {0xc0031a2880, 0xc}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca63f8, 0xc0004e3380}, {0xc0031a2880, 0xc}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000f662c0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f662c0) test/e2e/framework/framework.go:435 +0x21d
exit status 255
from junit_runner.xml
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest2 Down
kubetest2 Up
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] deletion should be idempotent
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with different parameters
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when attachable [Feature:Flexvolumes]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [It] [sig-storage] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on