Result   | FAILURE
Tests    | 21 failed / 853 succeeded
Started  |
Elapsed  | 42m10s
Revision | master
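Each failure below cites junit_01.xml as its source. For triage outside the testgrid UI, here is a minimal, stand-alone sketch for listing the failed cases from that file with Go's encoding/xml. The struct layout assumes the usual JUnit schema (a <testsuite> root with <testcase> children carrying an optional <failure> element); the file name is taken from the report, everything else is illustrative.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"os"
)

// testSuite and testCase mirror only the JUnit fields needed to list failures.
type testSuite struct {
	XMLName   xml.Name   `xml:"testsuite"`
	TestCases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
}

type failure struct {
	Message string `xml:"message,attr"`
}

func main() {
	data, err := os.ReadFile("junit_01.xml")
	if err != nil {
		log.Fatal(err)
	}
	var suite testSuite
	if err := xml.Unmarshal(data, &suite); err != nil {
		log.Fatal(err)
	}
	// A test case is counted as failed when a <failure> element is present.
	for _, tc := range suite.TestCases {
		if tc.Failure != nil {
			fmt.Printf("FAILED: %s\n", tc.Name)
		}
	}
}
```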
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sexisting\sdirectories\swhen\sreadOnly\sspecified\sin\sthe\svolumeSource$'
test/e2e/framework/util.go:843
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1()
    test/e2e/framework/util.go:843 +0xa7
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc001612840, 0xc00173b000, {0xc003a639e0, 0x25}, {0xc002a35e90, 0x1, 0x738e9a8?}, 0x7624530)
    test/e2e/framework/util.go:852 +0x22f
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00173b000?, {0x736528f?, 0x0?}, 0xc00173b000, 0x0, {0xc002a35e90, 0x1, 0x1}, 0x22?)
    test/e2e/framework/util.go:770 +0x15f
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
    test/e2e/framework/framework.go:581
k8s.io/kubernetes/test/e2e/storage/testsuites.testReadFile(0xc001612840, {0xc003a7a138, 0x16}, 0xc00173b000, 0x0)
    test/e2e/storage/testsuites/subpath.go:692 +0x15c
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func18()
    test/e2e/storage/testsuites/subpath.go:421 +0x308
(from junit_01.xml)
{"msg":"FAILED External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","completed":0,"skipped":5,"failed":1,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]} [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:05.25�[0m Jan 17 22:31:05.250: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/17/23 22:31:05.251�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:05.579�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:05.79�[0m [It] should support existing directories when readOnly specified in the volumeSource test/e2e/storage/testsuites/subpath.go:396 Jan 17 22:31:06.001: INFO: Creating resource for dynamic PV Jan 17 22:31:06.001: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} �[1mSTEP:�[0m creating a StorageClass provisioning-9408-e2e-scfl5ts �[38;5;243m01/17/23 22:31:06.001�[0m �[1mSTEP:�[0m creating a claim �[38;5;243m01/17/23 22:31:06.109�[0m Jan 17 22:31:06.109: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil �[1mSTEP:�[0m Creating pod pod-subpath-test-dynamicpv-cqdc �[38;5;243m01/17/23 22:31:06.325�[0m �[1mSTEP:�[0m Creating a pod to test subpath �[38;5;243m01/17/23 22:31:06.325�[0m Jan 17 22:31:06.434: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cqdc" in namespace "provisioning-9408" to be "Succeeded or Failed" Jan 17 22:31:06.570: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 135.318546ms Jan 17 22:31:08.677: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242838893s Jan 17 22:31:10.694: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25942853s Jan 17 22:31:12.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243361251s Jan 17 22:31:14.684: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249116402s Jan 17 22:31:16.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244004836s Jan 17 22:31:18.692: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.257326497s Jan 17 22:31:20.677: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.242530473s Jan 17 22:31:22.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.243176593s Jan 17 22:31:24.696: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.261218326s Jan 17 22:31:26.679: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.244304003s Jan 17 22:31:28.696: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.261253144s Jan 17 22:31:30.680: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.245879478s Jan 17 22:31:32.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.243188116s Jan 17 22:31:34.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.243629646s Jan 17 22:31:36.677: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.242975026s Jan 17 22:31:38.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.243485425s Jan 17 22:31:40.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.24321658s Jan 17 22:31:42.678: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.243574783s Jan 17 22:31:44.680: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.245101744s Jan 17 22:31:46.719: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.284667829s �[1mSTEP:�[0m Saw pod success �[38;5;243m01/17/23 22:31:46.719�[0m Jan 17 22:31:46.719: INFO: Pod "pod-subpath-test-dynamicpv-cqdc" satisfied condition "Succeeded or Failed" Jan 17 22:31:46.830: INFO: Trying to get logs from node i-07023e4c3916cc727 pod pod-subpath-test-dynamicpv-cqdc container test-container-subpath-dynamicpv-cqdc: <nil> �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:31:46.94�[0m Jan 17 22:31:47.059: INFO: Waiting for pod pod-subpath-test-dynamicpv-cqdc to disappear Jan 17 22:31:47.168: INFO: Pod pod-subpath-test-dynamicpv-cqdc no longer exists �[1mSTEP:�[0m Deleting pod pod-subpath-test-dynamicpv-cqdc �[38;5;243m01/17/23 22:31:47.168�[0m Jan 17 22:31:47.168: INFO: Deleting pod "pod-subpath-test-dynamicpv-cqdc" in namespace "provisioning-9408" �[1mSTEP:�[0m Creating pod pod-subpath-test-dynamicpv-cqdc �[38;5;243m01/17/23 22:31:47.274�[0m �[1mSTEP:�[0m Creating a pod to test subpath �[38;5;243m01/17/23 22:31:47.274�[0m Jan 17 22:31:47.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cqdc" in namespace "provisioning-9408" to be "Succeeded or Failed" Jan 17 22:31:47.492: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. Elapsed: 105.812414ms Jan 17 22:31:49.598: INFO: Pod "pod-subpath-test-dynamicpv-cqdc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.212598519s Jan 17 22:32:12.383: INFO: Encountered non-retryable error while getting pod provisioning-9408/pod-subpath-test-dynamicpv-cqdc: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/pods/pod-subpath-test-dynamicpv-cqdc": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:32:12.501�[0m Jan 17 22:32:12.616: FAIL: Failed to delete pod "pod-subpath-test-dynamicpv-cqdc": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/pods/pod-subpath-test-dynamicpv-cqdc": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc001612840, 0xc00173b000, {0xc003a639e0, 0x25}, {0xc002a35e90, 0x1, 0x738e9a8?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00173b000?, {0x736528f?, 0x0?}, 0xc00173b000, 0x0, {0xc002a35e90, 0x1, 0x1}, 0x22?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.testReadFile(0xc001612840, {0xc003a7a138, 0x16}, 0xc00173b000, 0x0) test/e2e/storage/testsuites/subpath.go:692 +0x15c k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func18() test/e2e/storage/testsuites/subpath.go:421 +0x308 �[1mSTEP:�[0m Deleting pod �[38;5;243m01/17/23 22:32:12.616�[0m Jan 17 22:32:12.617: INFO: Deleting pod "pod-subpath-test-dynamicpv-cqdc" in namespace "provisioning-9408" �[1mSTEP:�[0m Deleting pvc �[38;5;243m01/17/23 22:32:12.735�[0m Jan 17 22:32:28.159: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.com9t6dc" �[1mSTEP:�[0m Deleting sc �[38;5;243m01/17/23 22:32:28.284�[0m Jan 17 22:32:28.409: INFO: Unexpected error: while cleaning up resource: <errors.aggregate | len:2, cap:2>: [ <*errors.errorString | 0xc00093c030>{ s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/pods/pod-subpath-test-dynamicpv-cqdc\": dial tcp 54.78.31.51:443: connect: connection refused", }, <errors.aggregate | len:3, cap:4>[ <*fmt.wrapError | 0xc003a99420>{ msg: "failed to find PVC ebs.csi.aws.com9t6dc: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/persistentvolumeclaims/ebs.csi.aws.com9t6dc\": dial tcp 54.78.31.51:443: connect: connection refused", err: <*url.Error | 0xc003c8be30>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/persistentvolumeclaims/ebs.csi.aws.com9t6dc", Err: <*net.OpError | 0xc003c96e10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c8be00>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003a993e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, <*fmt.wrapError | 0xc002e51780>{ msg: "failed to delete PVC ebs.csi.aws.com9t6dc: PVC Delete API error: Delete 
\"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/persistentvolumeclaims/ebs.csi.aws.com9t6dc\": dial tcp 54.78.31.51:443: connect: connection refused", err: <*errors.errorString | 0xc0004cb7d0>{ s: "PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/persistentvolumeclaims/ebs.csi.aws.com9t6dc\": dial tcp 54.78.31.51:443: connect: connection refused", }, }, <*fmt.wrapError | 0xc002e51900>{ msg: "failed to delete StorageClass provisioning-9408-e2e-scfl5ts: Delete \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/provisioning-9408-e2e-scfl5ts\": dial tcp 54.78.31.51:443: connect: connection refused", err: <*url.Error | 0xc003d00360>{ Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/provisioning-9408-e2e-scfl5ts", Err: <*net.OpError | 0xc002e29770>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003cc8b70>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002e518c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, ], ] Jan 17 22:32:28.410: FAIL: while cleaning up resource: [pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/pods/pod-subpath-test-dynamicpv-cqdc": dial tcp 54.78.31.51:443: connect: connection refused, failed to find PVC ebs.csi.aws.com9t6dc: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/persistentvolumeclaims/ebs.csi.aws.com9t6dc": dial tcp 54.78.31.51:443: connect: connection refused, failed to delete PVC ebs.csi.aws.com9t6dc: PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/persistentvolumeclaims/ebs.csi.aws.com9t6dc": dial tcp 54.78.31.51:443: connect: connection refused, failed to delete StorageClass provisioning-9408-e2e-scfl5ts: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/provisioning-9408-e2e-scfl5ts": dial tcp 54.78.31.51:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2() test/e2e/storage/testsuites/subpath.go:184 +0x366 panic({0x6ea2520, 0xc003b7ef00}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00039c620}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002799180, 0x121}, {0xc002a359a0?, 0x735bfcc?, 0xc002a359c8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x73ce3de?, 0x1f?}, {0xc002a35ab8?, 0x0?, 0x0?}) test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/framework.(*PodClient).DeleteSync(0xc000d80db0, {0xc000524cc0, 0x1f}, {{{0x0, 0x0}, {0x0, 0x0}}, 0x0, 0x0, 0x0, ...}, ...) 
test/e2e/framework/pods.go:183 +0x195 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc001612840, 0xc00173b000, {0xc003a639e0, 0x25}, {0xc002a35e90, 0x1, 0x738e9a8?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00173b000?, {0x736528f?, 0x0?}, 0xc00173b000, 0x0, {0xc002a35e90, 0x1, 0x1}, 0x22?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.testReadFile(0xc001612840, {0xc003a7a138, 0x16}, 0xc00173b000, 0x0) test/e2e/storage/testsuites/subpath.go:692 +0x15c k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func18() test/e2e/storage/testsuites/subpath.go:421 +0x308 [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "provisioning-9408". �[38;5;243m01/17/23 22:32:28.41�[0m Jan 17 22:32:28.531: INFO: Unexpected error: failed to list events in namespace "provisioning-9408": <*url.Error | 0xc003c3bf50>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/events", Err: <*net.OpError | 0xc003c01d10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003bd9740>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003c1e400>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:28.531: FAIL: failed to list events in namespace "provisioning-9408": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003caf590, {0xc002ccf4b8, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc002aa3e00}, {0xc002ccf4b8, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001612840, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001612840) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "provisioning-9408" for this suite. 
�[38;5;243m01/17/23 22:32:28.532�[0m Jan 17 22:32:28.671: FAIL: Couldn't delete ns: "provisioning-9408": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9408", Err:(*net.OpError)(0xc003bdae10)}) Full Stack Trace panic({0x6ea2520, 0xc003b7f9c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00039dab0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003c6e400, 0x100}, {0xc003caf048?, 0x735bfcc?, 0xc003caf068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00339ae10, 0xeb}, {0xc003caf0e0?, 0xc003c6c780?, 0xc003caf108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc003c3bf50}, {0xc003c1e440?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003caf590, {0xc002ccf4b8, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc002aa3e00}, {0xc002ccf4b8, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001612840, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001612840) test/e2e/framework/framework.go:435 +0x21d
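The first half of the log above is a poll loop: the framework checks the pod roughly every 2s, for up to 5m, until it reports Succeeded or Failed, and the loop aborts as soon as a Get fails with the connection refused error. Below is a rough sketch of that shape of loop using client-go and apimachinery's wait helper; the package name, function name, and hard-coded interval/timeout are assumptions for illustration, not the framework's actual code.

```go
// Package e2etriage holds illustrative snippets; the name is arbitrary.
package e2etriage

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminal polls every 2s, for up to 5m, until the pod reaches
// Succeeded or Failed, which is the same shape of loop that produces the
// `Phase="Pending" ... Elapsed: ...` lines above.
func waitForPodTerminal(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// When the API server stops accepting connections (as around
			// 22:32:12 in the log above), this Get fails and the whole
			// wait aborts with that error.
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
}
```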
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(ext4\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
test/e2e/framework/util.go:843
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1()
    test/e2e/framework/util.go:843 +0xa7
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc0015ccdc0, 0xc0017bb800, {0xc0034a3820, 0x1d}, {0xc002e07ed8, 0x1, 0x64f79e0?}, 0x7624530)
    test/e2e/framework/util.go:852 +0x22f
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0031f9ce0?, {0x7388cdb?, 0x0?}, 0xc0017bb800, 0x0, {0xc002e07ed8, 0x1, 0x1}, 0xc0004a1730?)
    test/e2e/framework/util.go:770 +0x15f
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
    test/e2e/framework/framework.go:581
k8s.io/kubernetes/test/e2e/storage/testsuites.testScriptInPod(0xc0015ccdc0, {0x7368fb1?, 0xc0001eb5d0?}, 0xc00156ad20, 0xc0031fc8a0)
    test/e2e/storage/testsuites/volumes.go:257 +0x6aa
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func4()
    test/e2e/storage/testsuites/volumes.go:203 +0xb1
(from junit_01.xml)
{"msg":"FAILED External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","completed":0,"skipped":1,"failed":1,"failures":["External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume"]} [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:05.641�[0m Jan 17 22:31:05.642: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volume �[38;5;243m01/17/23 22:31:05.654�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:05.974�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:06.184�[0m [It] should allow exec of files on the volume test/e2e/storage/testsuites/volumes.go:198 Jan 17 22:31:06.394: INFO: Creating resource for dynamic PV Jan 17 22:31:06.394: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} �[1mSTEP:�[0m creating a StorageClass volume-8782-e2e-scdzxx5 �[38;5;243m01/17/23 22:31:06.395�[0m �[1mSTEP:�[0m creating a claim �[38;5;243m01/17/23 22:31:06.525�[0m Jan 17 22:31:06.525: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil �[1mSTEP:�[0m Creating pod exec-volume-test-dynamicpv-plg8 �[38;5;243m01/17/23 22:31:06.78�[0m �[1mSTEP:�[0m Creating a pod to test exec-volume-test �[38;5;243m01/17/23 22:31:06.78�[0m Jan 17 22:31:06.914: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-plg8" in namespace "volume-8782" to be "Succeeded or Failed" Jan 17 22:31:07.028: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 113.662762ms Jan 17 22:31:09.134: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219785467s Jan 17 22:31:11.135: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220820832s Jan 17 22:31:13.142: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227871128s Jan 17 22:31:15.176: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261920306s Jan 17 22:31:17.136: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221383215s Jan 17 22:31:19.186: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.271257258s Jan 17 22:31:21.148: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.233662418s Jan 17 22:31:23.136: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.221271619s Jan 17 22:31:25.139: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.224118251s Jan 17 22:31:27.137: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.222944343s Jan 17 22:31:29.136: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.221202737s Jan 17 22:31:31.138: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.223596402s Jan 17 22:31:33.135: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.220891533s Jan 17 22:31:35.138: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.223915755s Jan 17 22:31:37.134: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 30.219901509s Jan 17 22:31:39.137: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 32.2226563s Jan 17 22:31:41.136: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.221312259s Jan 17 22:31:43.155: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.240715097s Jan 17 22:31:45.135: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.220323386s Jan 17 22:31:47.138: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.223293903s Jan 17 22:31:49.135: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 42.220346148s Jan 17 22:31:51.194: INFO: Pod "exec-volume-test-dynamicpv-plg8": Phase="Pending", Reason="", readiness=false. Elapsed: 44.279647672s Jan 17 22:32:12.365: INFO: Encountered non-retryable error while getting pod volume-8782/exec-volume-test-dynamicpv-plg8: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/pods/exec-volume-test-dynamicpv-plg8": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:32:12.481�[0m Jan 17 22:32:12.597: FAIL: Failed to delete pod "exec-volume-test-dynamicpv-plg8": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/pods/exec-volume-test-dynamicpv-plg8": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc0015ccdc0, 0xc0017bb800, {0xc0034a3820, 0x1d}, {0xc002e07ed8, 0x1, 0x64f79e0?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0031f9ce0?, {0x7388cdb?, 0x0?}, 0xc0017bb800, 0x0, {0xc002e07ed8, 0x1, 0x1}, 0xc0004a1730?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) 
test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.testScriptInPod(0xc0015ccdc0, {0x7368fb1?, 0xc0001eb5d0?}, 0xc00156ad20, 0xc0031fc8a0) test/e2e/storage/testsuites/volumes.go:257 +0x6aa k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volumes.go:203 +0xb1 �[1mSTEP:�[0m Deleting pvc �[38;5;243m01/17/23 22:32:12.597�[0m Jan 17 22:32:12.717: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.com5ngng" �[1mSTEP:�[0m Deleting sc �[38;5;243m01/17/23 22:32:12.835�[0m Jan 17 22:32:28.415: INFO: Unexpected error: while cleaning up resource: <errors.aggregate | len:1, cap:1>: [ <errors.aggregate | len:3, cap:4>[ <*fmt.wrapError | 0xc0039caec0>{ msg: "failed to find PVC ebs.csi.aws.com5ngng: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/persistentvolumeclaims/ebs.csi.aws.com5ngng\": dial tcp 54.78.31.51:443: connect: connection refused", err: <*url.Error | 0xc000f62120>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/persistentvolumeclaims/ebs.csi.aws.com5ngng", Err: <*net.OpError | 0xc0039c9e00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0031bbe60>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0039cae80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, <*fmt.wrapError | 0xc003720a00>{ msg: "failed to delete PVC ebs.csi.aws.com5ngng: PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/persistentvolumeclaims/ebs.csi.aws.com5ngng\": dial tcp 54.78.31.51:443: connect: connection refused", err: <*errors.errorString | 0xc000bc1330>{ s: "PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/persistentvolumeclaims/ebs.csi.aws.com5ngng\": dial tcp 54.78.31.51:443: connect: connection refused", }, }, <*fmt.wrapError | 0xc0039cafa0>{ msg: "failed to delete StorageClass volume-8782-e2e-scdzxx5: Delete \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-8782-e2e-scdzxx5\": dial tcp 54.78.31.51:443: connect: connection refused", err: <*url.Error | 0xc000f62a20>{ Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-8782-e2e-scdzxx5", Err: <*net.OpError | 0xc000f74460>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000cd8fc0>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0039caf60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, ], ] Jan 17 22:32:28.416: FAIL: while cleaning up resource: [failed to find PVC ebs.csi.aws.com5ngng: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/persistentvolumeclaims/ebs.csi.aws.com5ngng": dial tcp 54.78.31.51:443: connect: connection refused, failed to delete PVC ebs.csi.aws.com5ngng: PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/persistentvolumeclaims/ebs.csi.aws.com5ngng": dial tcp 54.78.31.51:443: connect: connection refused, failed to delete StorageClass volume-8782-e2e-scdzxx5: Delete 
"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-8782-e2e-scdzxx5": dial tcp 54.78.31.51:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func2() test/e2e/storage/testsuites/volumes.go:157 +0x22e panic({0x6ea2520, 0xc0010ca6c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000152690}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003180120, 0x11b}, {0xc002e07970?, 0x735bfcc?, 0xc002e07998?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x73ce3de?, 0x1f?}, {0xc002e07a88?, 0x0?, 0x0?}) test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/framework.(*PodClient).DeleteSync(0xc0015c54e8, {0xc0034a3ce0, 0x1f}, {{{0x0, 0x0}, {0x0, 0x0}}, 0x0, 0x0, 0x0, ...}, ...) test/e2e/framework/pods.go:183 +0x195 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc0015ccdc0, 0xc0017bb800, {0xc0034a3820, 0x1d}, {0xc002e07ed8, 0x1, 0x64f79e0?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0031f9ce0?, {0x7388cdb?, 0x0?}, 0xc0017bb800, 0x0, {0xc002e07ed8, 0x1, 0x1}, 0xc0004a1730?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.testScriptInPod(0xc0015ccdc0, {0x7368fb1?, 0xc0001eb5d0?}, 0xc00156ad20, 0xc0031fc8a0) test/e2e/storage/testsuites/volumes.go:257 +0x6aa k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volumes.go:203 +0xb1 [AfterEach] [Testpattern: Dynamic PV (ext4)] volumes test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "volume-8782". �[38;5;243m01/17/23 22:32:28.416�[0m Jan 17 22:32:28.555: INFO: Unexpected error: failed to list events in namespace "volume-8782": <*url.Error | 0xc000f631d0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/events", Err: <*net.OpError | 0xc000f74780>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc000cc3050>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0039cb520>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:28.555: FAIL: failed to list events in namespace "volume-8782": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001659590, {0xc0035ca180, 0xb}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc0003cca80}, {0xc0035ca180, 0xb}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0015ccdc0, 0x2?) 
test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0015ccdc0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "volume-8782" for this suite. �[38;5;243m01/17/23 22:32:28.556�[0m Jan 17 22:32:28.676: FAIL: Couldn't delete ns: "volume-8782": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8782", Err:(*net.OpError)(0xc000cdee10)}) Full Stack Trace panic({0x6ea2520, 0xc00163c240}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00044ed20}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00165c000, 0xf4}, {0xc001659048?, 0x735bfcc?, 0xc001659068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0007081c0, 0xdf}, {0xc0016590e0?, 0xc0032836b0?, 0xc001659108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc000f631d0}, {0xc0039cb560?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001659590, {0xc0035ca180, 0xb}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc0003cca80}, {0xc0035ca180, 0xb}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0015ccdc0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0015ccdc0) test/e2e/framework/framework.go:435 +0x21d
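The cleanup failures in the two storage entries above are dumped as errors.aggregate values: each deferred delete (pod, PVC, StorageClass) hits connection refused, and the individual errors are folded into one. Here is a small sketch of that folding with apimachinery's aggregate helper; the literal error strings are stand-ins copied loosely from the log, not the output of real API calls.

```go
package main

import (
	"errors"
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

func main() {
	// Stand-ins for the per-resource cleanup failures shown in the dumps above.
	errs := []error{
		errors.New(`pod Delete API error: dial tcp 54.78.31.51:443: connect: connection refused`),
		errors.New(`failed to delete PVC ebs.csi.aws.com9t6dc: connection refused`),
		errors.New(`failed to delete StorageClass provisioning-9408-e2e-scfl5ts: connection refused`),
	}
	// NewAggregate folds the slice into a single error value (and returns nil for
	// an empty slice), which is what the "<errors.aggregate | len:2, cap:2>"
	// dumps above reflect.
	fmt.Println(utilerrors.NewAggregate(errs))
}
```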
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.17\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
test/e2e/apimachinery/aggregator.go:333
k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000db1a20, 0xc000d67fc8, {0xc001de9b00, 0x37})
    test/e2e/apimachinery/aggregator.go:333 +0x2be5
k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3()
    test/e2e/apimachinery/aggregator.go:102 +0x125
(from junit_01.xml)
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","completed":1,"skipped":45,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} [BeforeEach] [sig-api-machinery] Aggregator test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:41.088�[0m Jan 17 22:31:41.088: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename aggregator �[38;5;243m01/17/23 22:31:41.089�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:41.431�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:41.647�[0m [BeforeEach] [sig-api-machinery] Aggregator test/e2e/apimachinery/aggregator.go:78 Jan 17 22:31:41.864: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] test/e2e/apimachinery/aggregator.go:100 �[1mSTEP:�[0m Registering the sample API server. �[38;5;243m01/17/23 22:31:41.865�[0m Jan 17 22:31:43.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 17 22:31:45.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 17 22:31:47.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 17 22:31:49.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 22, 31, 42, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-5885c99c55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 17 22:32:12.365: INFO: Unexpected error: deploying extension apiserver in namespace aggregator-9717: <*errors.errorString | 0xc00128f050>: { s: "error waiting for deployment \"sample-apiserver-deployment\" status to match expectation: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/aggregator-9717/deployments/sample-apiserver-deployment\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", } Jan 17 22:32:12.365: FAIL: deploying extension apiserver in namespace aggregator-9717: error waiting for deployment "sample-apiserver-deployment" status to match expectation: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/aggregator-9717/deployments/sample-apiserver-deployment": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000db1a20, 0xc000d67fc8, {0xc001de9b00, 0x37}) test/e2e/apimachinery/aggregator.go:333 +0x2be5 k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3() test/e2e/apimachinery/aggregator.go:102 +0x125 [AfterEach] [sig-api-machinery] Aggregator test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "aggregator-9717". 
�[38;5;243m01/17/23 22:32:28.905�[0m Jan 17 22:32:29.025: INFO: Unexpected error: failed to list events in namespace "aggregator-9717": <*url.Error | 0xc00359d860>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/aggregator-9717/events", Err: <*net.OpError | 0xc0035dc1e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0034f3050>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc00340c940>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:29.025: FAIL: failed to list events in namespace "aggregator-9717": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/aggregator-9717/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0008d9590, {0xc002474390, 0xf}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc002210780}, {0xc002474390, 0xf}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000db1a20, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000db1a20) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "aggregator-9717" for this suite. �[38;5;243m01/17/23 22:32:29.025�[0m Jan 17 22:32:29.143: FAIL: Couldn't delete ns: "aggregator-9717": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/aggregator-9717": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/aggregator-9717", Err:(*net.OpError)(0xc003525680)}) Full Stack Trace panic({0x6ea2520, 0xc0011fbf40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00038e690}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00382a400, 0xfc}, {0xc0008d9048?, 0x735bfcc?, 0xc0008d9068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0027223c0, 0xe7}, {0xc0008d90e0?, 0xc0035c3080?, 0xc0008d9108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc00359d860}, {0xc00340c980?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0008d9590, {0xc002474390, 0xf}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc002210780}, {0xc002474390, 0xf}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000db1a20, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000db1a20) test/e2e/framework/framework.go:435 +0x21d
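The Aggregator entry above fails while waiting for the sample-apiserver Deployment to become available: its status dumps never reach AvailableReplicas:1 before the API server connection drops. Below is a sketch of the kind of availability check one might poll with client-go; the function and package names are placeholders, not the e2e framework's helper.

```go
package e2etriage

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploymentAvailable reports whether a Deployment (for the Aggregator test,
// "sample-apiserver-deployment") has all requested replicas updated and
// available, the condition the status dumps above never satisfy.
func deploymentAvailable(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	want := int32(1)
	if d.Spec.Replicas != nil {
		want = *d.Spec.Replicas
	}
	return d.Status.ObservedGeneration >= d.Generation &&
		d.Status.UpdatedReplicas == want &&
		d.Status.AvailableReplicas == want, nil
}
```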
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sdelete\sjobs\sand\spods\screated\sby\scronjob$'
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 (from junit_01.xml)
{"msg":"FAILED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","completed":3,"skipped":74,"failed":1,"failures":["[sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob"]} [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:44.859�[0m Jan 17 22:31:44.860: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m01/17/23 22:31:44.861�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:45.183�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:45.392�[0m [It] should delete jobs and pods created by cronjob test/e2e/apimachinery/garbage_collector.go:1145 �[1mSTEP:�[0m Create the cronjob �[38;5;243m01/17/23 22:31:45.599�[0m �[1mSTEP:�[0m Wait for the CronJob to create new Job �[38;5;243m01/17/23 22:31:45.71�[0m Jan 17 22:32:12.349: FAIL: Failed to wait for the CronJob to create some Jobs: failed to list jobs: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/batch/v1/namespaces/gc-3835/jobs": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "gc-3835". �[38;5;243m01/17/23 22:32:12.349�[0m Jan 17 22:32:12.467: INFO: Unexpected error: failed to list events in namespace "gc-3835": <*url.Error | 0xc00420c3f0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-3835/events", Err: <*net.OpError | 0xc000c7e2d0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d0a450>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0008562e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.467: FAIL: failed to list events in namespace "gc-3835": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-3835/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00427d590, {0xc004166049, 0x7}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc002d4cf00}, {0xc004166049, 0x7}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0008511e0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0008511e0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "gc-3835" for this suite. 
�[38;5;243m01/17/23 22:32:12.467�[0m Jan 17 22:32:12.585: FAIL: Couldn't delete ns: "gc-3835": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-3835": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-3835", Err:(*net.OpError)(0xc0043384b0)}) Full Stack Trace panic({0x6ea2520, 0xc00323c680}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000258540}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0001b64b0, 0xec}, {0xc00427d048?, 0x735bfcc?, 0xc00427d068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0009a00e0, 0xd7}, {0xc00427d0e0?, 0xc0002c2b00?, 0xc00427d108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc00420c3f0}, {0xc000856340?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc00427d590, {0xc004166049, 0x7}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc002d4cf00}, {0xc004166049, 0x7}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc0008511e0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0008511e0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sorphan\spods\screated\sby\src\sif\sdelete\soptions\ssay\sso\s\[Conformance\]$'
vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 (from junit_01.xml)
{"msg":"FAILED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","completed":2,"skipped":31,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]} [BeforeEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:17.558�[0m Jan 17 22:31:17.559: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename gc �[38;5;243m01/17/23 22:31:17.56�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:17.908�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:18.126�[0m [It] should orphan pods created by rc if delete options say so [Conformance] test/e2e/apimachinery/garbage_collector.go:370 �[1mSTEP:�[0m create the rc �[38;5;243m01/17/23 22:31:18.457�[0m �[1mSTEP:�[0m delete the rc �[38;5;243m01/17/23 22:31:23.682�[0m �[1mSTEP:�[0m wait for the rc to be deleted �[38;5;243m01/17/23 22:31:23.794�[0m �[1mSTEP:�[0m wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[38;5;243m01/17/23 22:31:28.904�[0m Jan 17 22:32:12.380: FAIL: Failed to list pods: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-6716/pods": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace [AfterEach] [sig-api-machinery] Garbage collector test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "gc-6716". �[38;5;243m01/17/23 22:32:12.38�[0m Jan 17 22:32:12.500: INFO: Unexpected error: failed to list events in namespace "gc-6716": <*url.Error | 0xc001de5ce0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-6716/events", Err: <*net.OpError | 0xc001e74140>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001c82a50>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc001d83120>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.500: FAIL: failed to list events in namespace "gc-6716": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-6716/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002f8f590, {0xc001b21729, 0x7}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc00057f980}, {0xc001b21729, 0x7}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000cefce0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000cefce0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "gc-6716" for this suite. 
�[38;5;243m01/17/23 22:32:12.5�[0m Jan 17 22:32:12.616: FAIL: Couldn't delete ns: "gc-6716": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-6716": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-6716", Err:(*net.OpError)(0xc001e745f0)}) Full Stack Trace panic({0x6ea2520, 0xc001e58500}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000262460}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00355cc30, 0xec}, {0xc002f8f048?, 0x735bfcc?, 0xc002f8f068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00008ca80, 0xd7}, {0xc002f8f0e0?, 0xc00349b130?, 0xc002f8f108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc001de5ce0}, {0xc001d83160?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002f8f590, {0xc001b21729, 0x7}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc00057f980}, {0xc001b21729, 0x7}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000cefce0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000cefce0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sCronJob\sshould\snot\semit\sunexpected\swarnings$'
test/e2e/apps/cronjob.go:227
k8s.io/kubernetes/test/e2e/apps.glob..func2.6()
    test/e2e/apps/cronjob.go:227 +0x22e
(from junit_01.xml)
{"msg":"FAILED [sig-apps] CronJob should not emit unexpected warnings","completed":1,"skipped":2,"failed":1,"failures":["[sig-apps] CronJob should not emit unexpected warnings"]} [BeforeEach] [sig-apps] CronJob test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:07.784�[0m Jan 17 22:31:07.784: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename cronjob �[38;5;243m01/17/23 22:31:07.786�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:08.103�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:08.316�[0m [It] should not emit unexpected warnings test/e2e/apps/cronjob.go:218 �[1mSTEP:�[0m Creating a cronjob �[38;5;243m01/17/23 22:31:08.524�[0m �[1mSTEP:�[0m Ensuring at least two jobs and at least one finished job exists by listing jobs explicitly �[38;5;243m01/17/23 22:31:08.637�[0m Jan 17 22:32:12.346: INFO: Unexpected error: Failed to ensure at least two job exists in namespace cronjob-5865: <*rest.wrapPreviousError | 0xc002cf2f40>: { currentErr: <*url.Error | 0xc002cf6390>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/batch/v1/namespaces/cronjob-5865/jobs", Err: <*net.OpError | 0xc003b4eaa0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0024a3d40>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002cf2ee0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc000110100>{s: "unexpected EOF"}, } Jan 17 22:32:12.346: FAIL: Failed to ensure at least two job exists in namespace cronjob-5865: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/batch/v1/namespaces/cronjob-5865/jobs": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func2.6() test/e2e/apps/cronjob.go:227 +0x22e [AfterEach] [sig-apps] CronJob test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "cronjob-5865". �[38;5;243m01/17/23 22:32:12.347�[0m Jan 17 22:32:12.472: INFO: Unexpected error: failed to list events in namespace "cronjob-5865": <*url.Error | 0xc0024a3e90>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/cronjob-5865/events", Err: <*net.OpError | 0xc0039eb810>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002cf7080>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003a36ee0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.472: FAIL: failed to list events in namespace "cronjob-5865": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/cronjob-5865/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001d17590, {0xc0024da0b0, 0xc}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc003a18180}, {0xc0024da0b0, 0xc}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00091dce0, 0x1?) 
test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00091dce0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "cronjob-5865" for this suite. �[38;5;243m01/17/23 22:32:12.473�[0m Jan 17 22:32:12.592: FAIL: Couldn't delete ns: "cronjob-5865": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/cronjob-5865": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/cronjob-5865", Err:(*net.OpError)(0xc003b4f220)}) Full Stack Trace panic({0x6ea2520, 0xc0024a0d80}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000c01730}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003348c00, 0xf6}, {0xc001d17048?, 0x735bfcc?, 0xc001d17068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0003b8690, 0xe1}, {0xc001d170e0?, 0xc002cf0420?, 0xc001d17108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc0024a3e90}, {0xc003a36f20?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001d17590, {0xc0024da0b0, 0xc}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc003a18180}, {0xc0024da0b0, 0xc}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00091dce0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00091dce0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sJob\sshould\srun\sa\sjob\sto\scompletion\swhen\stasks\ssucceed$'
test/e2e/apps/job.go:89
k8s.io/kubernetes/test/e2e/apps.glob..func7.1()
	test/e2e/apps/job.go:89 +0x271
from junit_01.xml
{"msg":"FAILED [sig-apps] Job should run a job to completion when tasks succeed","completed":2,"skipped":15,"failed":1,"failures":["[sig-apps] Job should run a job to completion when tasks succeed"]} [BeforeEach] [sig-apps] Job test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:29.055�[0m Jan 17 22:31:29.055: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename job �[38;5;243m01/17/23 22:31:29.056�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:29.374�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:29.586�[0m [It] should run a job to completion when tasks succeed test/e2e/apps/job.go:81 �[1mSTEP:�[0m Creating a job �[38;5;243m01/17/23 22:31:29.796�[0m �[1mSTEP:�[0m Ensuring job reaches completions �[38;5;243m01/17/23 22:31:29.911�[0m Jan 17 22:32:12.361: INFO: Unexpected error: failed to ensure job completion in namespace: job-8147: <*rest.wrapPreviousError | 0xc002003720>: { currentErr: <*url.Error | 0xc0021989c0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/batch/v1/namespaces/job-8147/jobs/all-succeed", Err: <*net.OpError | 0xc002070f50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001e8b620>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0020036e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc000110100>{s: "unexpected EOF"}, } Jan 17 22:32:12.361: FAIL: failed to ensure job completion in namespace: job-8147: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/batch/v1/namespaces/job-8147/jobs/all-succeed": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func7.1() test/e2e/apps/job.go:89 +0x271 [AfterEach] [sig-apps] Job test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "job-8147". �[38;5;243m01/17/23 22:32:12.361�[0m Jan 17 22:32:12.480: INFO: Unexpected error: failed to list events in namespace "job-8147": <*url.Error | 0xc001745290>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/job-8147/events", Err: <*net.OpError | 0xc003429ef0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0021992f0>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0009d15a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.480: FAIL: failed to list events in namespace "job-8147": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/job-8147/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003369590, {0xc001c74390, 0x8}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000991380}, {0xc001c74390, 0x8}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000b162c0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b162c0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "job-8147" for this suite. 
�[38;5;243m01/17/23 22:32:12.48�[0m Jan 17 22:32:12.598: FAIL: Couldn't delete ns: "job-8147": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/job-8147": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/job-8147", Err:(*net.OpError)(0xc00351fc70)}) Full Stack Trace panic({0x6ea2520, 0xc0002954c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000397730}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0016ae5a0, 0xee}, {0xc003369048?, 0x735bfcc?, 0xc003369068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0017320e0, 0xd9}, {0xc0033690e0?, 0xc0034e4e70?, 0xc003369108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc001745290}, {0xc0009d15e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003369590, {0xc001c74390, 0x8}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000991380}, {0xc001c74390, 0x8}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000b162c0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b162c0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\slist\,\spatch\sand\sdelete\sa\scollection\sof\sStatefulSets\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:68
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca2818, 0xc003c80900}, 0xc001bce500)
	test/e2e/framework/statefulset/rest.go:68 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()
	test/e2e/framework/statefulset/wait.go:37 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2681cf1, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c66b28?, 0xc0001ac000?}, 0xc003b41960?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c66b28, 0xc0001ac000}, 0xc002aebea8, 0x2ef28aa?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c66b28, 0xc0001ac000}, 0xd8?, 0x2ef1445?, 0x20?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c66b28, 0xc0001ac000}, 0xc001bce500?, 0xc002a1fd28?, 0x2568967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0xc003db4090?, 0x23?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca2818?, 0xc003c80900}, 0x1, 0x1, 0xc001bce500)
	test/e2e/framework/statefulset/wait.go:35 +0xbd
k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)
	test/e2e/framework/statefulset/wait.go:80
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.14()
	test/e2e/apps/statefulset.go:922 +0x377
from junit_01.xml
E0117 22:32:12.347920 6581 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 17 22:32:12.347: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182/pods?labelSelector=name%3Dsample-pod%2Cpod%3Dhttpd\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca2818, 0xc003c80900}, 0xc001bce500)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2681cf1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c66b28?, 0xc0001ac000?}, 0xc003b41960?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c66b28, 0xc0001ac000}, 0xc002aebea8, 0x2ef28aa?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c66b28, 0xc0001ac000}, 0xd8?, 0x2ef1445?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c66b28, 0xc0001ac000}, 0xc001bce500?, 0xc002a1fd28?, 0x2568967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0xc003db4090?, 0x23?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca2818?, 0xc003c80900}, 0x1, 0x1, 0xc001bce500)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.14()\n\ttest/e2e/apps/statefulset.go:922 +0x377"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 467 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6ea2520?, 0xc003d343c0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xfffffffe?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6ea2520, 0xc003d343c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000ba60e0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc003d0d2c0, 0x123}, {0xc002c2d3d8?, 0xc002c2d3e8?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003d0d2c0, 0x123}, {0xc002c2d4b8?, 0x735bfcc?, 0xc002c2d4d8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc003812b40, 0x10e}, {0xc002c2d550?, 0xc003812b40?, 0xc002c2d578?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c33ce0, 0xc003b31a00}, {0x0?, 0xc003b01580?, 0x19?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca2818, 0xc003c80900}, 0xc001bce500) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2681cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c66b28?, 0xc0001ac000?}, 0xc003b41960?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c66b28, 0xc0001ac000}, 0xc002aebea8, 0x2ef28aa?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c66b28, 0xc0001ac000}, 0xd8?, 0x2ef1445?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c66b28, 0xc0001ac000}, 0xc001bce500?, 0xc002a1fd28?, 0x2568967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0xc003db4090?, 0x23?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca2818?, 0xc003c80900}, 0x1, 0x1, 0xc001bce500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.14() test/e2e/apps/statefulset.go:922 +0x377 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 +0x8d created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:593 +0x60c {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","completed":2,"skipped":27,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]"]} [BeforeEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:34.17�[0m Jan 17 22:31:34.170: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename statefulset �[38;5;243m01/17/23 22:31:34.171�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:34.494�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:34.706�[0m [BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:96 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:111 �[1mSTEP:�[0m Creating service test in namespace statefulset-8182 �[38;5;243m01/17/23 22:31:34.92�[0m [It] should list, patch and delete a collection of StatefulSets [Conformance] test/e2e/apps/statefulset.go:906 Jan 17 22:31:35.250: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 17 22:31:45.358: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 17 22:32:12.347: INFO: Unexpected error: <*rest.wrapPreviousError | 0xc003b31a00>: { currentErr: <*url.Error | 0xc003d21530>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182/pods?labelSelector=name%3Dsample-pod%2Cpod%3Dhttpd", Err: <*net.OpError | 0xc003d32730>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c18f30>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003b319c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc000192100>{s: "unexpected EOF"}, } Jan 17 22:32:12.347: FAIL: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182/pods?labelSelector=name%3Dsample-pod%2Cpod%3Dhttpd": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca2818, 0xc003c80900}, 0xc001bce500) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2681cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c66b28?, 0xc0001ac000?}, 0xc003b41960?) 
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c66b28, 0xc0001ac000}, 0xc002aebea8, 0x2ef28aa?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c66b28, 0xc0001ac000}, 0xd8?, 0x2ef1445?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c66b28, 0xc0001ac000}, 0xc001bce500?, 0xc002a1fd28?, 0x2568967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0xc003db4090?, 0x23?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca2818?, 0xc003c80900}, 0x1, 0x1, 0xc001bce500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.14() test/e2e/apps/statefulset.go:922 +0x377 E0117 22:32:12.347920 6581 runtime.go:79] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 17 22:32:12.347: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182/pods?labelSelector=name%3Dsample-pod%2Cpod%3Dhttpd\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", Filename:"test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca2818, 0xc003c80900}, 0xc001bce500)\n\ttest/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\ttest/e2e/framework/statefulset/wait.go:37 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2681cf1, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c66b28?, 0xc0001ac000?}, 0xc003b41960?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c66b28, 0xc0001ac000}, 0xc002aebea8, 0x2ef28aa?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c66b28, 0xc0001ac000}, 0xd8?, 0x2ef1445?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c66b28, 0xc0001ac000}, 0xc001bce500?, 0xc002a1fd28?, 0x2568967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0xc003db4090?, 0x23?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca2818?, 0xc003c80900}, 0x1, 0x1, 0xc001bce500)\n\ttest/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\ttest/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.14()\n\ttest/e2e/apps/statefulset.go:922 +0x377"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. 
Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 467 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6ea2520?, 0xc003d343c0}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xfffffffe?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x6ea2520, 0xc003d343c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000ba60e0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc003d0d2c0, 0x123}, {0xc002c2d3d8?, 0xc002c2d3e8?, 0x0?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:335 +0x225 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003d0d2c0, 0x123}, {0xc002c2d4b8?, 0x735bfcc?, 0xc002c2d4d8?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc003812b40, 0x10e}, {0xc002c2d550?, 0xc003812b40?, 0xc002c2d578?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c33ce0, 0xc003b31a00}, {0x0?, 0xc003b01580?, 0x19?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7ca2818, 0xc003c80900}, 0xc001bce500) test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() test/e2e/framework/statefulset/wait.go:37 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2681cf1, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7c66b28?, 0xc0001ac000?}, 0xc003b41960?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7c66b28, 0xc0001ac000}, 0xc002aebea8, 0x2ef28aa?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7c66b28, 0xc0001ac000}, 0xd8?, 0x2ef1445?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7c66b28, 0xc0001ac000}, 0xc001bce500?, 0xc002a1fd28?, 0x2568967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x0?, 0xc003db4090?, 0x23?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7ca2818?, 0xc003c80900}, 0x1, 0x1, 0xc001bce500) test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.14() test/e2e/apps/statefulset.go:922 +0x377 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:605 +0x8d created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:593 +0x60c [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:122 Jan 17 22:32:12.474: INFO: Deleting all statefulset in ns statefulset-8182 Jan 17 22:32:12.590: INFO: Unexpected error: <*url.Error | 0xc003da43c0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-8182/statefulsets", Err: <*net.OpError | 0xc003bba050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c18b10>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003d9c020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.590: FAIL: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-8182/statefulsets": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x7ca2818, 0xc003c80900}, {0xc003d9a650, 0x10}) test/e2e/framework/statefulset/rest.go:75 +0x133 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:127 +0x1b2 [AfterEach] [sig-apps] StatefulSet test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "statefulset-8182". �[38;5;243m01/17/23 22:32:12.591�[0m Jan 17 22:32:12.712: INFO: Unexpected error: failed to list events in namespace "statefulset-8182": <*url.Error | 0xc0009ae900>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182/events", Err: <*net.OpError | 0xc003c34370>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c19020>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003b30000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.713: FAIL: failed to list events in namespace "statefulset-8182": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003dbd590, {0xc003d9a650, 0x10}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc003c80900}, {0xc003d9a650, 0x10}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000dbb760, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000dbb760) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "statefulset-8182" for this suite. 
�[38;5;243m01/17/23 22:32:12.713�[0m Jan 17 22:32:12.831: FAIL: Couldn't delete ns: "statefulset-8182": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8182", Err:(*net.OpError)(0xc0009ba820)}) Full Stack Trace panic({0x6ea2520, 0xc0009a4480}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc000b96770}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0009b1300, 0xfe}, {0xc003dbd048?, 0x735bfcc?, 0xc003dbd068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0001dc4b0, 0xe9}, {0xc003dbd0e0?, 0xc003d1db00?, 0xc003dbd108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc0009ae900}, {0xc003b30040?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003dbd590, {0xc003d9a650, 0x10}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc003c80900}, {0xc003d9a650, 0x10}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000dbb760, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000dbb760) test/e2e/framework/framework.go:435 +0x21d
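Note on the Ginkgo message repeated in the StatefulSet failure above ("if you make an assertion in a goroutine, Ginkgo can't capture the panic... call defer GinkgoRecover() at the top of the goroutine"): the sketch below illustrates that pattern in isolation. It is not taken from the failing test; the suite name and the assertion are placeholders, and it assumes Ginkgo v2 and Gomega as dependencies.

```go
package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// TestSketch wires Gomega failures into Ginkgo and runs the specs below.
func TestSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "sketch suite")
}

var _ = Describe("assertions in goroutines", func() {
	It("recovers Ginkgo failures raised off the spec goroutine", func() {
		done := make(chan struct{})
		go func() {
			// Without this deferred call, a failed assertion in this goroutine
			// panics outside Ginkgo's control and crashes the test process,
			// which is what produces the "Observed a panic" output above.
			defer GinkgoRecover()
			defer close(done)
			Expect(1 + 1).To(Equal(2)) // placeholder assertion
		}()
		<-done
	})
})
```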
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/network/utils.go:725
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createTestPods(0xc000f460e0)
	test/e2e/framework/network/utils.go:725 +0x13e
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000f460e0, 0x47?)
	test/e2e/framework/network/utils.go:764 +0x9f
k8s.io/kubernetes/test/e2e/framework/network.NewCoreNetworkingTestConfig(0xc000cac000, 0x1)
	test/e2e/framework/network/utils.go:142 +0xfb
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.4()
	test/e2e/common/network/networking.go:106 +0x34
from junit_01.xml
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","completed":1,"skipped":3,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]} [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:15.733�[0m Jan 17 22:31:15.733: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename pod-network-test �[38;5;243m01/17/23 22:31:15.734�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:16.06�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:16.269�[0m [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] test/e2e/common/network/networking.go:105 �[1mSTEP:�[0m Performing setup for networking test in namespace pod-network-test-8288 �[38;5;243m01/17/23 22:31:16.481�[0m �[1mSTEP:�[0m creating a selector �[38;5;243m01/17/23 22:31:16.481�[0m �[1mSTEP:�[0m Creating the service pods in kubernetes �[38;5;243m01/17/23 22:31:16.481�[0m Jan 17 22:31:16.481: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 17 22:31:17.145: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8288" to be "running and ready" Jan 17 22:31:17.254: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 109.358885ms Jan 17 22:31:17.254: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 17 22:31:19.393: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248324223s Jan 17 22:31:19.393: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 17 22:31:21.370: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.224717608s Jan 17 22:31:21.370: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:23.361: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.215668653s Jan 17 22:31:23.361: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:25.361: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.215559751s Jan 17 22:31:25.361: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:27.361: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.215660998s Jan 17 22:31:27.361: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:29.362: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.216507993s Jan 17 22:31:29.362: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:31.363: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 14.217494757s Jan 17 22:31:31.363: INFO: The phase of Pod netserver-0 is Running (Ready = true) Jan 17 22:31:31.363: INFO: Pod "netserver-0" satisfied condition "running and ready" Jan 17 22:31:31.468: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8288" to be "running and ready" Jan 17 22:31:31.574: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. 
Elapsed: 105.629834ms Jan 17 22:31:31.574: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 17 22:31:33.681: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 2.212482712s Jan 17 22:31:33.681: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 17 22:31:35.680: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 4.211932683s Jan 17 22:31:35.680: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 17 22:31:37.682: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=false. Elapsed: 6.213359072s Jan 17 22:31:37.682: INFO: The phase of Pod netserver-1 is Running (Ready = false) Jan 17 22:31:39.680: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 8.212103358s Jan 17 22:31:39.680: INFO: The phase of Pod netserver-1 is Running (Ready = true) Jan 17 22:31:39.680: INFO: Pod "netserver-1" satisfied condition "running and ready" Jan 17 22:31:39.787: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8288" to be "running and ready" Jan 17 22:31:39.894: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 106.55776ms Jan 17 22:31:39.894: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 17 22:31:42.003: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 2.21541025s Jan 17 22:31:42.003: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 17 22:31:44.001: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 4.213157334s Jan 17 22:31:44.001: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 17 22:31:46.003: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 6.215711566s Jan 17 22:31:46.003: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 17 22:31:48.000: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=false. Elapsed: 8.212480203s Jan 17 22:31:48.000: INFO: The phase of Pod netserver-2 is Running (Ready = false) Jan 17 22:31:50.005: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 10.21771304s Jan 17 22:31:50.005: INFO: The phase of Pod netserver-2 is Running (Ready = true) Jan 17 22:31:50.005: INFO: Pod "netserver-2" satisfied condition "running and ready" Jan 17 22:31:50.111: INFO: Waiting up to 5m0s for pod "netserver-3" in namespace "pod-network-test-8288" to be "running and ready" Jan 17 22:31:50.216: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=true. Elapsed: 105.400261ms Jan 17 22:31:50.216: INFO: The phase of Pod netserver-3 is Running (Ready = true) Jan 17 22:31:50.216: INFO: Pod "netserver-3" satisfied condition "running and ready" �[1mSTEP:�[0m Creating test pods �[38;5;243m01/17/23 22:31:50.321�[0m Jan 17 22:31:50.540: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8288" to be "running" Jan 17 22:31:50.650: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 110.389623ms Jan 17 22:32:12.366: INFO: Encountered non-retryable error while getting pod pod-network-test-8288/test-container-pod: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288/pods/test-container-pod": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Jan 17 22:32:12.366: INFO: Unexpected error: <*fmt.wrapError | 0xc001a145e0>: { msg: "error while waiting for pod pod-network-test-8288/test-container-pod to be running: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288/pods/test-container-pod\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", err: <*rest.wrapPreviousError | 0xc001a145c0>{ currentErr: <*url.Error | 0xc0018a3c20>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288/pods/test-container-pod", Err: <*net.OpError | 0xc001a30c80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00196a600>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc001a14580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc000118100>{s: "unexpected EOF"}, }, } Jan 17 22:32:12.366: FAIL: error while waiting for pod pod-network-test-8288/test-container-pod to be running: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288/pods/test-container-pod": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createTestPods(0xc000f460e0) test/e2e/framework/network/utils.go:725 +0x13e k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000f460e0, 0x47?) test/e2e/framework/network/utils.go:764 +0x9f k8s.io/kubernetes/test/e2e/framework/network.NewCoreNetworkingTestConfig(0xc000cac000, 0x1) test/e2e/framework/network/utils.go:142 +0xfb k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.4() test/e2e/common/network/networking.go:106 +0x34 [AfterEach] [sig-network] Networking test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "pod-network-test-8288". 
�[38;5;243m01/17/23 22:32:12.366�[0m Jan 17 22:32:12.495: INFO: Unexpected error: failed to list events in namespace "pod-network-test-8288": <*url.Error | 0xc00196a930>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288/events", Err: <*net.OpError | 0xc001954410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00196a900>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0017fc6a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.495: FAIL: failed to list events in namespace "pod-network-test-8288": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001bf1590, {0xc000c6c258, 0x15}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc001558000}, {0xc000c6c258, 0x15}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000cac000, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000cac000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "pod-network-test-8288" for this suite. �[38;5;243m01/17/23 22:32:12.495�[0m Jan 17 22:32:12.612: FAIL: Couldn't delete ns: "pod-network-test-8288": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-8288", Err:(*net.OpError)(0xc0019547d0)}) Full Stack Trace panic({0x6ea2520, 0xc001928e40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0006741c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002d1b440, 0x108}, {0xc001bf1048?, 0x735bfcc?, 0xc001bf1068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000011a00, 0xf3}, {0xc001bf10e0?, 0xc000c675c0?, 0xc001bf1108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc00196a930}, {0xc0017fc6e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc001bf1590, {0xc000c6c258, 0x15}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc001558000}, {0xc000c6c258, 0x15}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000cac000, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000cac000) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\snode\-Service\:\sudp$'
test/e2e/framework/network/utils.go:725
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createTestPods(0xc002556000)
	test/e2e/framework/network/utils.go:725 +0x13e
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc002556000, 0x7f6d51d1eac8?)
	test/e2e/framework/network/utils.go:764 +0x9f
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc002556000, 0x3e?)
	test/e2e/framework/network/utils.go:776 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ae89a0, {0xc0004aef40, 0x1, 0xc000ac1f18?})
	test/e2e/framework/network/utils.go:129 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func21.6.5()
	test/e2e/network/networking.go:207 +0x51
from junit_01.xml
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","completed":0,"skipped":4,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should function for node-Service: udp"]} [BeforeEach] [sig-network] Networking test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:05.666�[0m Jan 17 22:31:05.666: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename nettest �[38;5;243m01/17/23 22:31:05.667�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:05.992�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:06.206�[0m [It] should function for node-Service: udp test/e2e/network/networking.go:206 �[1mSTEP:�[0m Performing setup for networking test in namespace nettest-8466 �[38;5;243m01/17/23 22:31:06.422�[0m �[1mSTEP:�[0m creating a selector �[38;5;243m01/17/23 22:31:06.422�[0m �[1mSTEP:�[0m Creating the service pods in kubernetes �[38;5;243m01/17/23 22:31:06.422�[0m Jan 17 22:31:06.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 17 22:31:07.272: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-8466" to be "running and ready" Jan 17 22:31:07.399: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 126.480665ms Jan 17 22:31:07.399: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 17 22:31:09.511: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238755824s Jan 17 22:31:09.511: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 17 22:31:11.511: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238536157s Jan 17 22:31:11.511: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 17 22:31:13.507: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234308318s Jan 17 22:31:13.507: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 17 22:31:15.510: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.237681217s Jan 17 22:31:15.510: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:17.531: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.258455591s Jan 17 22:31:17.531: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:19.511: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.238112306s Jan 17 22:31:19.511: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:21.530: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.257258468s Jan 17 22:31:21.530: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:23.512: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.239314686s Jan 17 22:31:23.512: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:25.552: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.279347327s Jan 17 22:31:25.552: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:27.515: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.242586689s Jan 17 22:31:27.515: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:29.511: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.239065771s Jan 17 22:31:29.512: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:31.508: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.23555464s Jan 17 22:31:31.508: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:33.521: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.248395034s Jan 17 22:31:33.521: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:35.508: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.235895812s Jan 17 22:31:35.508: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:37.508: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.236007996s Jan 17 22:31:37.508: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:39.510: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.237860702s Jan 17 22:31:39.510: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:41.508: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.235373809s Jan 17 22:31:41.508: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:43.508: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.23510918s Jan 17 22:31:43.508: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 17 22:31:45.512: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 38.239372813s Jan 17 22:31:45.512: INFO: The phase of Pod netserver-0 is Running (Ready = true) Jan 17 22:31:45.512: INFO: Pod "netserver-0" satisfied condition "running and ready" Jan 17 22:31:45.620: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "nettest-8466" to be "running and ready" Jan 17 22:31:45.729: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 108.542899ms Jan 17 22:31:45.729: INFO: The phase of Pod netserver-1 is Running (Ready = true) Jan 17 22:31:45.729: INFO: Pod "netserver-1" satisfied condition "running and ready" Jan 17 22:31:45.837: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "nettest-8466" to be "running and ready" Jan 17 22:31:45.944: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 107.570055ms Jan 17 22:31:45.944: INFO: The phase of Pod netserver-2 is Running (Ready = true) Jan 17 22:31:45.944: INFO: Pod "netserver-2" satisfied condition "running and ready" Jan 17 22:31:46.052: INFO: Waiting up to 5m0s for pod "netserver-3" in namespace "nettest-8466" to be "running and ready" Jan 17 22:31:46.160: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=false. Elapsed: 108.108204ms Jan 17 22:31:46.160: INFO: The phase of Pod netserver-3 is Running (Ready = false) Jan 17 22:31:48.268: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=false. Elapsed: 2.215993707s Jan 17 22:31:48.268: INFO: The phase of Pod netserver-3 is Running (Ready = false) Jan 17 22:31:50.269: INFO: Pod "netserver-3": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.21706463s Jan 17 22:31:50.269: INFO: The phase of Pod netserver-3 is Running (Ready = true) Jan 17 22:31:50.269: INFO: Pod "netserver-3" satisfied condition "running and ready" �[1mSTEP:�[0m Creating test pods �[38;5;243m01/17/23 22:31:50.377�[0m Jan 17 22:31:50.603: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "nettest-8466" to be "running" Jan 17 22:31:50.710: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 107.415797ms Jan 17 22:32:12.368: INFO: Encountered non-retryable error while getting pod nettest-8466/test-container-pod: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466/pods/test-container-pod": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Jan 17 22:32:12.368: INFO: Unexpected error: <*fmt.wrapError | 0xc001b71d00>: { msg: "error while waiting for pod nettest-8466/test-container-pod to be running: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466/pods/test-container-pod\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", err: <*rest.wrapPreviousError | 0xc001b71ce0>{ currentErr: <*url.Error | 0xc002bb4cc0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466/pods/test-container-pod", Err: <*net.OpError | 0xc001b73f90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002b49e90>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc001b71ca0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc000192100>{s: "unexpected EOF"}, }, } Jan 17 22:32:12.368: FAIL: error while waiting for pod nettest-8466/test-container-pod to be running: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466/pods/test-container-pod": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createTestPods(0xc002556000) test/e2e/framework/network/utils.go:725 +0x13e k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc002556000, 0x7f6d51d1eac8?) test/e2e/framework/network/utils.go:764 +0x9f k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc002556000, 0x3e?) test/e2e/framework/network/utils.go:776 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ae89a0, {0xc0004aef40, 0x1, 0xc000ac1f18?}) test/e2e/framework/network/utils.go:129 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func21.6.5() test/e2e/network/networking.go:207 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "nettest-8466". 
�[38;5;243m01/17/23 22:32:12.369�[0m Jan 17 22:32:12.493: INFO: Unexpected error: failed to list events in namespace "nettest-8466": <*url.Error | 0xc002a290e0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466/events", Err: <*net.OpError | 0xc00240d180>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002c901e0>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc001d4cd80>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.493: FAIL: failed to list events in namespace "nettest-8466": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002ea9590, {0xc000ac1a10, 0xc}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000252300}, {0xc000ac1a10, 0xc}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000ae89a0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ae89a0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "nettest-8466" for this suite. �[38;5;243m01/17/23 22:32:12.493�[0m Jan 17 22:32:12.611: FAIL: Couldn't delete ns: "nettest-8466": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-8466", Err:(*net.OpError)(0xc00226eaa0)}) Full Stack Trace panic({0x6ea2520, 0xc002864e40}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0002141c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002a51800, 0xf6}, {0xc002ea9048?, 0x735bfcc?, 0xc002ea9068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001cd4e10, 0xe1}, {0xc002ea90e0?, 0xc001d073f0?, 0xc002ea9108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc002a290e0}, {0xc001d4cdc0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002ea9590, {0xc000ac1a10, 0xc}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000252300}, {0xc000ac1a10, 0xc}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000ae89a0, 0x2?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ae89a0) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sInitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartNever\spod\s\[Conformance\]$'
test/e2e/common/node/init_container.go:227 k8s.io/kubernetes/test/e2e/common/node.glob..func8.2() test/e2e/common/node/init_container.go:227 +0x9ca from junit_01.xml
E0117 22:32:11.385404 6704 retrywatcher.go:130] "Watch failed" err="Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/init-container-6533/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpod-init-af4069c4-2e52-4aa6-a660-a5c56787265e&resourceVersion=4021&watch=true\": dial tcp 54.78.31.51:443: connect: connection refused" E0117 22:32:12.349138 6704 retrywatcher.go:130] "Watch failed" err="Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/init-container-6533/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpod-init-af4069c4-2e52-4aa6-a660-a5c56787265e&resourceVersion=4021&watch=true\": dial tcp 54.78.31.51:443: connect: connection refused" {"msg":"FAILED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","completed":1,"skipped":6,"failed":1,"failures":["[sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]"]} [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:41.511�[0m Jan 17 22:31:41.511: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename init-container �[38;5;243m01/17/23 22:31:41.512�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:41.831�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:42.041�[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] test/e2e/common/node/init_container.go:164 [It] should invoke init containers on a RestartNever pod [Conformance] test/e2e/common/node/init_container.go:176 �[1mSTEP:�[0m creating the pod �[38;5;243m01/17/23 22:31:42.251�[0m Jan 17 22:31:42.251: INFO: PodSpec: initContainers in spec.initContainers E0117 22:32:11.385404 6704 retrywatcher.go:130] "Watch failed" err="Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/init-container-6533/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpod-init-af4069c4-2e52-4aa6-a660-a5c56787265e&resourceVersion=4021&watch=true\": dial tcp 54.78.31.51:443: connect: connection refused" E0117 22:32:12.349138 6704 retrywatcher.go:130] "Watch failed" err="Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/init-container-6533/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpod-init-af4069c4-2e52-4aa6-a660-a5c56787265e&resourceVersion=4021&watch=true\": dial tcp 54.78.31.51:443: connect: connection refused" Jan 17 22:32:44.599: INFO: Unexpected error: <*errors.errorString | 0xc000043080>: { s: "watch closed before UntilWithoutRetry timeout", } Jan 17 22:32:44.599: FAIL: watch closed before UntilWithoutRetry timeout Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func8.2() test/e2e/common/node/init_container.go:227 +0x9ca [AfterEach] [sig-node] InitContainer [NodeConformance] test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "init-container-6533". �[38;5;243m01/17/23 22:32:44.599�[0m �[1mSTEP:�[0m Found 10 events. 
�[38;5;243m01/17/23 22:32:44.706�[0m Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:42 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {default-scheduler } Scheduled: Successfully assigned init-container-6533/pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e to i-07023e4c3916cc727 Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:43 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:43 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Created: Created container init1 Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:43 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Started: Started container init1 Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:44 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:44 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Created: Created container init2 Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:44 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Started: Started container init2 Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:45 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-2" already present on machine Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:45 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Created: Created container run1 Jan 17 22:32:44.706: INFO: At 2023-01-17 22:31:45 +0000 UTC - event for pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e: {kubelet i-07023e4c3916cc727} Started: Started container run1 Jan 17 22:32:44.813: INFO: POD NODE PHASE GRACE CONDITIONS Jan 17 22:32:44.813: INFO: pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e i-07023e4c3916cc727 Succeeded [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 22:31:45 +0000 UTC PodCompleted } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-17 22:31:42 +0000 UTC PodCompleted } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-17 22:31:42 +0000 UTC PodCompleted } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 22:31:42 +0000 UTC }] Jan 17 22:32:44.813: INFO: Jan 17 22:32:45.140: INFO: Logging node info for node i-0242e0df14fd9a246 Jan 17 22:32:45.247: INFO: Node Info: &Node{ObjectMeta:{i-0242e0df14fd9a246 0b21abc2-41c7-4385-8f7a-1e581a05d7f6 4610 0 2023-01-17 22:23:42 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0242e0df14fd9a246 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0242e0df14fd9a246"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-17 22:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2023-01-17 22:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-17 22:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:24:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:31:13 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-17 22:32:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-0242e0df14fd9a246,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:51 +0000 UTC,LastTransitionTime:2023-01-17 22:23:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:23:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:23:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:23:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:24:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.200,},NodeAddress{Type:ExternalIP,Address:52.213.7.85,},NodeAddress{Type:InternalDNS,Address:i-0242e0df14fd9a246.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0242e0df14fd9a246.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-213-7-85.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2999a568d8f64a6bb6da38e830c71d,SystemUUID:ec2999a5-68d8-f64a-6bb6-da38e830c71d,BootID:59d34639-aa2c-481c-843b-ffd918a461a4,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f registry.k8s.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-06e67f2dd87e85985],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-02cb30d8f11b6a982,DevicePath:,},},Config:nil,},} Jan 17 22:32:45.247: INFO: Logging kubelet events for node i-0242e0df14fd9a246 Jan 17 22:32:45.357: INFO: Logging pods the kubelet thinks is on node i-0242e0df14fd9a246 Jan 17 22:32:45.700: INFO: netserver-0 started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-wz4hm started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-lmwjz started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: sample-apiserver-deployment-5885c99c55-hdptf started at 2023-01-17 22:31:42 +0000 UTC (0+2 container statuses recorded) Jan 17 22:32:45.700: INFO: Container etcd ready: false, restart count 0 Jan 17 22:32:45.700: INFO: Container sample-apiserver ready: false, restart count 0 Jan 17 22:32:45.700: INFO: pod-subpath-test-dynamicpv-cqdc started at 2023-01-17 22:31:47 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container test-container-subpath-dynamicpv-cqdc ready: false, restart count 0 Jan 17 22:32:45.700: INFO: startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container busybox ready: false, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-48twj started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-pzdql started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-b7qmc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: hostpath-symlink-prep-provisioning-2558 started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: deployment-shared-unset-79c9978db8-sd7qj started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: false, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-845c8977d9-nfhj6 started at 2023-01-17 22:31:08 +0000 UTC (0+1 container 
statuses recorded) Jan 17 22:32:45.700: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-nj774 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: test-ss-0 started at 2023-01-17 22:31:35 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-69b7448995-n5jkz started at 2023-01-17 22:31:43 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-hs4vc started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-v44gx started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-ft7tv started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-69b7448995-84hk5 started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: test-container-pod started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: ebs-csi-node-j85wb started at 2023-01-17 22:23:42 +0000 UTC (0+3 container statuses recorded) Jan 17 22:32:45.700: INFO: Container ebs-plugin ready: true, restart count 2 Jan 17 22:32:45.700: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:32:45.700: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 17 22:32:45.700: INFO: webserver-deployment-845c8977d9-plfll started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:45.700: INFO: rs-89fnm started at 2023-01-17 22:31:17 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container donothing ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-q8rdm started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-fp2rb started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-845c8977d9-gk7sj started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:45.700: INFO: pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38 started at 2023-01-17 22:31:47 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container secret-volume-test ready: false, restart count 0 Jan 17 22:32:45.700: INFO: rs-qb8g6 started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: simpletest.rc-w8xxk started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: all-succeed-dwbk2 started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: host-test-container-pod started at 2023-01-17 22:31:50 
+0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container agnhost-container ready: false, restart count 0 Jan 17 22:32:45.700: INFO: busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3 started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container busybox ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-j6qgk started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-gmh7p started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-2b882 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-krcwv started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-b6kpc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-845c8977d9-sbsz8 started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-845c8977d9-4gdqf started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: simpletest.rc-ptsqn started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-sfplv started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: kube-proxy-i-0242e0df14fd9a246 started at 2023-01-17 22:23:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:32:45.700: INFO: netserver-0 started at 2023-01-17 22:31:16 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:45.700: INFO: all-succeed-6zmzw started at 2023-01-17 22:31:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container c ready: false, restart count 0 Jan 17 22:32:45.700: INFO: hostexec-i-0242e0df14fd9a246-g7tf7 started at <nil> (0+0 container statuses recorded) Jan 17 22:32:45.700: INFO: exec-volume-test-dynamicpv-plg8 started at 2023-01-17 22:31:10 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container exec-container-dynamicpv-plg8 ready: false, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-wz9vn started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-6wxsg started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-72njr started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 
Jan 17 22:32:45.700: INFO: simpletest.rc-68pvv started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-w5z4k started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: all-succeed-xwz8t started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container c ready: false, restart count 0 Jan 17 22:32:45.700: INFO: simpletest.rc-mws4p started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:45.700: INFO: webserver-deployment-69b7448995-tfz8n started at 2023-01-17 22:31:43 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:45.700: INFO: pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3 started at 2023-01-17 22:31:44 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:45.700: INFO: Container agnhost-container ready: false, restart count 0 Jan 17 22:32:46.490: INFO: Latency metrics for node i-0242e0df14fd9a246 Jan 17 22:32:46.490: INFO: Logging node info for node i-0343380b4938db9ae Jan 17 22:32:46.600: INFO: Node Info: &Node{ObjectMeta:{i-0343380b4938db9ae f0ba41a2-7254-4dfe-a6dc-6961e20c2727 4561 0 2023-01-17 22:23:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0343380b4938db9ae kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0343380b4938db9ae"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kops-controller Update v1 2023-01-17 22:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:41 +0000 UTC FieldsV1 
{"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:28:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:31:15 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-17 22:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-0343380b4938db9ae,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:41 +0000 UTC,LastTransitionTime:2023-01-17 22:23:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:28:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.8,},NodeAddress{Type:ExternalIP,Address:34.247.32.45,},NodeAddress{Type:InternalDNS,Address:i-0343380b4938db9ae.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0343380b4938db9ae.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-247-32-45.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22f9541f6e4acecb3762e769a2a117,SystemUUID:ec22f954-1f6e-4ace-cb37-62e769a2a117,BootID:9c8e1afb-e8db-4c0f-83ad-ba6d32d6cbde,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-073502f7651aa020c,DevicePath:,},},Config:nil,},} Jan 17 22:32:46.601: INFO: Logging kubelet events for node i-0343380b4938db9ae Jan 17 22:32:46.728: INFO: Logging pods the kubelet thinks is on node i-0343380b4938db9ae Jan 17 22:32:46.855: INFO: simpletest.rc-lknsp started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-845c8977d9-5vmhz started at 2023-01-17 22:31:46 
+0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-69b7448995-bnjh5 started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:46.855: INFO: kube-proxy-i-0343380b4938db9ae started at 2023-01-17 22:23:24 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:32:46.855: INFO: simpletest.rc-r6j9l started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-m4jnp started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-b8hnr started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-7hk4t started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-tprcg started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: ebs-csi-node-8z29v started at 2023-01-17 22:28:31 +0000 UTC (0+3 container statuses recorded) Jan 17 22:32:46.855: INFO: Container ebs-plugin ready: true, restart count 0 Jan 17 22:32:46.855: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:32:46.855: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 17 22:32:46.855: INFO: netserver-1 started at 2023-01-17 22:31:16 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:46.855: INFO: deployment-shared-unset-79c9978db8-7mq67 started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: false, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-c8wrj started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-vr9n8 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-lprmm started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-8pw96 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-rsfll started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-vhhwc started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-fk8jg started at 2023-01-17 22:31:18 +0000 UTC (0+1 container 
statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-snjt7 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-jqd57 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: inline-volume-tester-p7ntr started at 2023-01-17 22:31:13 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-z4fmc started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-8gzx8 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-xq8h9 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-g75fv started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-n8fnm started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-69b7448995-zm4kv started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-845c8977d9-rjkj9 started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:46.855: INFO: coredns-85d58b74c8-4sqt8 started at 2023-01-17 22:24:10 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container coredns ready: true, restart count 1 Jan 17 22:32:46.855: INFO: webserver-deployment-845c8977d9-wwpfg started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-845c8977d9-mf2mf started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-xbpb6 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-6rrw4 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-95xz8 started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-69b7448995-w5kn8 started at 2023-01-17 22:31:43 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:46.855: INFO: netserver-1 started 
at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:46.855: INFO: webserver-deployment-845c8977d9-g5kxc started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:46.855: INFO: simpletest.rc-996xj started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:46.855: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.411: INFO: Latency metrics for node i-0343380b4938db9ae Jan 17 22:32:47.411: INFO: Logging node info for node i-05a4ff7b848c70e4e Jan 17 22:32:47.524: INFO: Node Info: &Node{ObjectMeta:{i-05a4ff7b848c70e4e 2b1b6da2-ac5c-4314-a72a-a4fdd7d9499e 3269 0 2023-01-17 22:23:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-05a4ff7b848c70e4e kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-05a4ff7b848c70e4e"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:41 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-17 22:31:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-05a4ff7b848c70e4e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054786048 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949928448 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:41 +0000 UTC,LastTransitionTime:2023-01-17 22:23:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:31:24 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:31:24 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:31:24 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:31:24 +0000 UTC,LastTransitionTime:2023-01-17 22:23:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.42.34,},NodeAddress{Type:ExternalIP,Address:54.171.91.207,},NodeAddress{Type:InternalDNS,Address:i-05a4ff7b848c70e4e.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-05a4ff7b848c70e4e.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-171-91-207.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b7446590104cba2b91864c81f3065,SystemUUID:ec2b7446-5901-04cb-a2b9-1864c81f3065,BootID:a699ef08-97f3-45a9-bbef-6b226135305c,KernelVersion:5.15.81-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3432.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 
registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 17 22:32:47.524: INFO: Logging kubelet events for node i-05a4ff7b848c70e4e Jan 17 22:32:47.636: INFO: Logging pods the kubelet thinks is on node i-05a4ff7b848c70e4e Jan 17 22:32:47.762: INFO: simpletest.rc-kzbsm started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-fj8bc started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-49fh7 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-9rsrp started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-6wdx2 started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-xxx8d started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-68n55 started at 2023-01-17 22:31:20 +0000 UTC (0+1 
container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: coredns-85d58b74c8-4xxft started at 2023-01-17 22:23:34 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container coredns ready: true, restart count 0 Jan 17 22:32:47.762: INFO: ebs-csi-node-h8cjv started at 2023-01-17 22:23:34 +0000 UTC (0+3 container statuses recorded) Jan 17 22:32:47.762: INFO: Container ebs-plugin ready: true, restart count 1 Jan 17 22:32:47.762: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:32:47.762: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-845c8977d9-mw2p4 started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-jlg5z started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-hmxsd started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: netserver-2 started at 2023-01-17 22:31:16 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:47.762: INFO: hostexec-i-05a4ff7b848c70e4e-drgjz started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-fzsz4 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-rbrjl started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-845c8977d9-r6jl4 started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-rxp6f started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-zrrmj started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-845c8977d9-dmclj started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-69b7448995-dzntx started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:47.762: INFO: coredns-autoscaler-5b9dc8bb99-96mpn started at 2023-01-17 22:23:34 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container autoscaler ready: true, restart count 0 Jan 17 22:32:47.762: INFO: netserver-2 started at 2023-01-17 22:31:07 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:47.762: INFO: 
webserver-deployment-69b7448995-rvfbk started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-hbrsh started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-lkgbr started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: hostexec-i-05a4ff7b848c70e4e-w888m started at 2023-01-17 22:31:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:32:47.762: INFO: pod-subpath-test-preprovisionedpv-ct78 started at 2023-01-17 22:31:22 +0000 UTC (1+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Init container init-volume-preprovisionedpv-ct78 ready: true, restart count 0 Jan 17 22:32:47.762: INFO: Container test-container-subpath-preprovisionedpv-ct78 ready: false, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-h6rm8 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-5nn2c started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-867n7 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-hvb6x started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-btkzj started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-z6nrv started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-69b7448995-4vvsj started at 2023-01-17 22:31:43 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-845c8977d9-sgb4d started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:47.762: INFO: webserver-deployment-845c8977d9-cqwfw started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:47.762: INFO: kube-proxy-i-05a4ff7b848c70e4e started at 2023-01-17 22:23:24 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container kube-proxy ready: true, restart count 0 Jan 17 22:32:47.762: INFO: hostexec-i-05a4ff7b848c70e4e-scq5f started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-zplng started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx 
ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-k92b5 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-w826j started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:47.762: INFO: simpletest.rc-956xj started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:47.762: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.210: INFO: Latency metrics for node i-05a4ff7b848c70e4e Jan 17 22:32:48.210: INFO: Logging node info for node i-07023e4c3916cc727 Jan 17 22:32:48.316: INFO: Node Info: &Node{ObjectMeta:{i-07023e4c3916cc727 a3ff0af6-8c1e-426e-9c46-865046711c4f 4606 0 2023-01-17 22:23:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-07023e4c3916cc727 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.hostpath.csi/node:i-07023e4c3916cc727 topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-12":"i-07023e4c3916cc727","ebs.csi.aws.com":"i-07023e4c3916cc727"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-17 22:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2023-01-17 22:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:41 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:27:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-17 22:32:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-07023e4c3916cc727,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:41 +0000 UTC,LastTransitionTime:2023-01-17 22:23:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:23:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:23:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:23:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:32:39 +0000 UTC,LastTransitionTime:2023-01-17 22:27:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.35.9,},NodeAddress{Type:ExternalIP,Address:34.253.197.55,},NodeAddress{Type:InternalDNS,Address:i-07023e4c3916cc727.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-07023e4c3916cc727.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-253-197-55.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec23921812a34c08be6592c8614a95a4,SystemUUID:ec239218-12a3-4c08-be65-92c8614a95a4,BootID:39a5df56-82ab-4cc6-bbc8-670bdfaa645b,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:89e900a160a986a1a7a4eba7f5259e510398fa87ca9b8a729e7dec59e04c7709 registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1],SizeBytes:22163966,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4fd21f36075b44d1a423dfb262ad79202ce54e95f5cbc4622a6c1c38ab287ad6 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0],SizeBytes:9132637,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 17 22:32:48.316: INFO: Logging kubelet events for node i-07023e4c3916cc727 Jan 17 22:32:48.426: INFO: Logging pods the kubelet thinks is on node i-07023e4c3916cc727 Jan 17 
22:32:48.544: INFO: simpletest.rc-9hkmg started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-69b7448995-jjz56 started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:48.544: INFO: host-test-container-pod started at 2023-01-17 22:31:50 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-845c8977d9-67t7j started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:48.544: INFO: test-container-pod started at 2023-01-17 22:31:50 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-865zr started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-f2wrm started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-845c8977d9-lx7rw started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-845c8977d9-6jlvv started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-hp87b started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-bdz2r started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-5g2dq started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-69b7448995-dvp6t started at 2023-01-17 22:31:43 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:48.544: INFO: deployment-shared-unset-79c9978db8-6sfxk started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: false, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-g68mw started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-sb9m9 started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-c6pl8 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-szlxc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 
22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-nb4s4 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-845c8977d9-ggh76 started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-7rhhc started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-nx7nc started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-dsbhs started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-b5ktr started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-69b7448995-2w4qz started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-q9qr5 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-76wpc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-x29fd started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: agnhost started at 2023-01-17 22:31:34 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container agnhost ready: true, restart count 0 Jan 17 22:32:48.544: INFO: webserver-deployment-845c8977d9-sm5gg started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-zlk5h started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-chvmn started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: all-succeed-jhlhk started at 2023-01-17 22:31:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container c ready: false, restart count 0 Jan 17 22:32:48.544: INFO: kube-proxy-i-07023e4c3916cc727 started at 2023-01-17 22:23:25 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:32:48.544: INFO: ebs-csi-node-tmp4f started at 2023-01-17 22:23:35 +0000 UTC (0+3 container statuses recorded) Jan 17 22:32:48.544: INFO: Container ebs-plugin ready: true, restart count 2 Jan 17 22:32:48.544: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:32:48.544: INFO: Container 
node-driver-registrar ready: true, restart count 1 Jan 17 22:32:48.544: INFO: netserver-3 started at 2023-01-17 22:31:17 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:48.544: INFO: netserver-3 started at 2023-01-17 22:31:07 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container webserver ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-v476r started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-s946w started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: csi-hostpathplugin-0 started at 2023-01-17 22:31:22 +0000 UTC (0+7 container statuses recorded) Jan 17 22:32:48.544: INFO: Container csi-attacher ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container csi-resizer ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container hostpath ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 17 22:32:48.544: INFO: webserver-deployment-69b7448995-h448q started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container httpd ready: false, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-j59fh started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-f5z59 started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-bndt8 started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: simpletest.rc-bf5pk started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Container nginx ready: true, restart count 0 Jan 17 22:32:48.544: INFO: pod-init-af4069c4-2e52-4aa6-a660-a5c56787265e started at 2023-01-17 22:31:42 +0000 UTC (2+1 container statuses recorded) Jan 17 22:32:48.544: INFO: Init container init1 ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Init container init2 ready: true, restart count 0 Jan 17 22:32:48.544: INFO: Container run1 ready: false, restart count 0 Jan 17 22:32:48.998: INFO: Latency metrics for node i-07023e4c3916cc727 Jan 17 22:32:48.998: INFO: Logging node info for node i-0f4738b0932ab9299 Jan 17 22:32:49.105: INFO: Node Info: &Node{ObjectMeta:{i-0f4738b0932ab9299 93dbf5f1-6205-48f8-b119-5372216e3b73 4587 0 2023-01-17 22:22:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f4738b0932ab9299 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: 
node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:eu-west-1a topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0f4738b0932ab9299"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-17 22:22:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {protokube Update v1 2023-01-17 22:22:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:22:40 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-17 22:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-0f4738b0932ab9299,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 
0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:22:40 +0000 UTC,LastTransitionTime:2023-01-17 22:22:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:21:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:21:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:21:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:32:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.63,},NodeAddress{Type:ExternalIP,Address:54.78.31.51,},NodeAddress{Type:InternalDNS,Address:i-0f4738b0932ab9299.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0f4738b0932ab9299.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-78-31-51.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2512be5df24a68030b243f0f25f7cc,SystemUUID:ec2512be-5df2-4a68-030b-243f0f25f7cc,BootID:88d5eff3-6b58-4e67-9934-581cdce3fe94,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.25.5],SizeBytes:129100243,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.25.5],SizeBytes:118446393,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.25.5],SizeBytes:51931448,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191755,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821707,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 
registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:dcccdfba225e93ba2060a4c0b9072b50b0a564354c37bba6ed3ce89c326db58c registry.k8s.io/provider-aws/cloud-controller-manager:v1.25.2],SizeBytes:18280697,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965792,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 17 22:32:49.105: INFO: Logging kubelet events for node i-0f4738b0932ab9299 Jan 17 22:32:49.216: INFO: Logging pods the kubelet thinks is on node i-0f4738b0932ab9299 Jan 17 22:32:49.346: INFO: kube-scheduler-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container kube-scheduler ready: true, restart count 1 Jan 17 22:32:49.346: INFO: dns-controller-56d4f686f6-wgj8p started at 2023-01-17 22:22:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container dns-controller ready: true, restart count 1 Jan 17 22:32:49.346: INFO: kops-controller-m2qmj started at 2023-01-17 22:22:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container kops-controller ready: true, restart count 2 Jan 17 22:32:49.346: INFO: ebs-csi-node-4zmsj started at 2023-01-17 22:22:36 +0000 UTC (0+3 container statuses recorded) Jan 17 22:32:49.346: INFO: Container ebs-plugin ready: true, restart count 1 Jan 17 22:32:49.346: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:32:49.346: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 17 22:32:49.346: INFO: aws-cloud-controller-manager-gmgnz started at 2023-01-17 22:22:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container aws-cloud-controller-manager ready: true, restart count 2 Jan 17 22:32:49.346: INFO: etcd-manager-main-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container etcd-manager ready: true, restart count 1 Jan 17 22:32:49.346: INFO: kube-apiserver-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+2 container statuses recorded) Jan 17 22:32:49.346: INFO: Container healthcheck ready: true, restart count 1 Jan 17 22:32:49.346: INFO: Container kube-apiserver ready: true, restart count 2 Jan 17 22:32:49.346: INFO: kube-proxy-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:32:49.346: INFO: ebs-csi-controller-696c7b9c79-9fsrb started at 2023-01-17 22:22:36 +0000 UTC (0+5 container statuses recorded) Jan 17 
22:32:49.346: INFO: Container csi-attacher ready: true, restart count 2 Jan 17 22:32:49.346: INFO: Container csi-provisioner ready: true, restart count 2 Jan 17 22:32:49.346: INFO: Container csi-resizer ready: true, restart count 1 Jan 17 22:32:49.346: INFO: Container ebs-plugin ready: false, restart count 1 Jan 17 22:32:49.346: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:32:49.346: INFO: etcd-manager-events-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container etcd-manager ready: true, restart count 1 Jan 17 22:32:49.346: INFO: kube-controller-manager-i-0f4738b0932ab9299 started at 2023-01-17 22:21:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:32:49.346: INFO: Container kube-controller-manager ready: false, restart count 3 Jan 17 22:32:49.730: INFO: Latency metrics for node i-0f4738b0932ab9299 Jan 17 22:32:49.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6533" for this suite. 01/17/23 22:32:49.837
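The per-node dumps above ("... started at ...", "Container ... ready: ...") come from the framework's node debugging output. As a rough illustration only, and not the framework's own code, the same view can be reconstructed with client-go by listing pods whose spec.nodeName matches the node; the kubeconfig path and node name are taken from the log above, everything else in this sketch is an assumption:

// Sketch only: list every pod scheduled on one node and print per-container readiness
// and restart counts, similar to the "Logging pods the kubelet thinks is on node" dump.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the ">>> kubeConfig" lines in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const nodeName = "i-0f4738b0932ab9299" // control-plane node from the dump above

	// All namespaces, filtered server-side to pods bound to this node.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + nodeName})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %s\n", p.Namespace, p.Name, p.Status.StartTime)
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n", cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}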
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/common/node/container_probe.go:910 k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000781a20, 0xc001670c00, 0x0, 0x37e11d6000?) test/e2e/common/node/container_probe.go:910 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.5() test/e2e/common/node/container_probe.go:157 +0x165 from junit_01.xml
{"msg":"FAILED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","completed":0,"skipped":3,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]} [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:05.622�[0m Jan 17 22:31:05.622: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename container-probe �[38;5;243m01/17/23 22:31:05.623�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:05.943�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:06.152�[0m [BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] test/e2e/common/node/container_probe.go:148 �[1mSTEP:�[0m Creating pod busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3 in namespace container-probe-8034 �[38;5;243m01/17/23 22:31:06.373�[0m Jan 17 22:31:06.483: INFO: Waiting up to 5m0s for pod "busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3" in namespace "container-probe-8034" to be "not pending" Jan 17 22:31:06.597: INFO: Pod "busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3": Phase="Pending", Reason="", readiness=false. Elapsed: 114.578678ms Jan 17 22:31:08.704: INFO: Pod "busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220775042s Jan 17 22:31:10.736: INFO: Pod "busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252909701s Jan 17 22:31:12.721: INFO: Pod "busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.238127414s Jan 17 22:31:12.721: INFO: Pod "busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3" satisfied condition "not pending" Jan 17 22:31:12.721: INFO: Started pod busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3 in namespace container-probe-8034 �[1mSTEP:�[0m checking the pod's current state and verifying that restartCount is present �[38;5;243m01/17/23 22:31:12.721�[0m Jan 17 22:31:12.828: INFO: Initial restart count of pod busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3 is 0 Jan 17 22:32:12.369: INFO: Unexpected error: getting pod : <*rest.wrapPreviousError | 0xc0003c1be0>: { currentErr: <*url.Error | 0xc0027ff8f0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-8034/pods/busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3", Err: <*net.OpError | 0xc0010a3090>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0027ff8c0>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0003c1b60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc0000c6130>{s: "unexpected EOF"}, } Jan 17 22:32:12.369: FAIL: getting pod : Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-8034/pods/busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000781a20, 0xc001670c00, 0x0, 0x37e11d6000?) test/e2e/common/node/container_probe.go:910 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.5() test/e2e/common/node/container_probe.go:157 +0x165 �[1mSTEP:�[0m deleting the pod �[38;5;243m01/17/23 22:32:12.369�[0m [AfterEach] [sig-node] Probing container test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "container-probe-8034". �[38;5;243m01/17/23 22:32:12.369�[0m Jan 17 22:32:12.489: INFO: Unexpected error: failed to list events in namespace "container-probe-8034": <*url.Error | 0xc0029aadb0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-8034/events", Err: <*net.OpError | 0xc0029c0c30>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00297ae70>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc001ae5940>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.489: FAIL: failed to list events in namespace "container-probe-8034": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-8034/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002a6f590, {0xc00254c150, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc0003cd800}, {0xc00254c150, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000781a20, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000781a20) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "container-probe-8034" for this suite. 
�[38;5;243m01/17/23 22:32:12.489�[0m Jan 17 22:32:12.606: FAIL: Couldn't delete ns: "container-probe-8034": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-8034": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-8034", Err:(*net.OpError)(0xc0010a3770)}) Full Stack Trace panic({0x6ea2520, 0xc0029ea680}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00044a070}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002720360, 0x106}, {0xc002a6f048?, 0x735bfcc?, 0xc002a6f068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001aff300, 0xf1}, {0xc002a6f0e0?, 0xc0021238c0?, 0xc002a6f108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc0029aadb0}, {0xc001ae5980?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002a6f590, {0xc00254c150, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc0003cd800}, {0xc00254c150, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000781a20, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000781a20) test/e2e/framework/framework.go:435 +0x21d
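Both probe failures in this run abort inside RunLivenessTest while it is re-reading the pod to compare restart counts. The following is a minimal sketch of that style of check, assuming an already-configured clientset; it illustrates the pattern, it is not the framework's RunLivenessTest. The Get call inside the loop is where these runs failed once the apiserver stopped answering ("dial tcp 54.78.31.51:443: connect: connection refused"):

// Sketch only: record the pod's initial restart count, then re-read it periodically
// and report an error if it ever changes during the observation window.
package probecheck

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func assertNoRestarts(ctx context.Context, c kubernetes.Interface, ns, name string, observe time.Duration) error {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("getting pod : %w", err)
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		return fmt.Errorf("pod %s/%s has no container statuses yet", ns, name)
	}
	initial := pod.Status.ContainerStatuses[0].RestartCount // "Initial restart count ... is 0" in the log

	deadline := time.Now().Add(observe)
	for time.Now().Before(deadline) {
		p, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// The failed runs above aborted on the equivalent of this Get.
			return fmt.Errorf("getting pod : %w", err)
		}
		if rc := p.Status.ContainerStatuses[0].RestartCount; rc != initial {
			return fmt.Errorf("restart count changed from %d to %d", initial, rc)
		}
		time.Sleep(10 * time.Second)
	}
	return nil // no restart observed within the window
}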
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\sbe\srestarted\sstartup\sprobe\sfails$'
test/e2e/common/node/container_probe.go:910 k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000ad8dc0, 0xc001cfdc00, 0x1, 0x37e11d6000?) test/e2e/common/node/container_probe.go:910 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.15() test/e2e/common/node/container_probe.go:338 +0x1c5 from junit_01.xml
{"msg":"FAILED [sig-node] Probing container should be restarted startup probe fails","completed":0,"skipped":6,"failed":1,"failures":["[sig-node] Probing container should be restarted startup probe fails"]} [BeforeEach] [sig-node] Probing container test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:05.691�[0m Jan 17 22:31:05.691: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename container-probe �[38;5;243m01/17/23 22:31:05.692�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:06.028�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:06.241�[0m [BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:59 [It] should be restarted startup probe fails test/e2e/common/node/container_probe.go:317 �[1mSTEP:�[0m Creating pod startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff in namespace container-probe-7154 �[38;5;243m01/17/23 22:31:06.455�[0m Jan 17 22:31:06.633: INFO: Waiting up to 5m0s for pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff" in namespace "container-probe-7154" to be "not pending" Jan 17 22:31:06.768: INFO: Pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff": Phase="Pending", Reason="", readiness=false. Elapsed: 135.143755ms Jan 17 22:31:08.878: INFO: Pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245501029s Jan 17 22:31:10.877: INFO: Pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243787732s Jan 17 22:31:12.877: INFO: Pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244256718s Jan 17 22:31:14.885: INFO: Pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff": Phase="Running", Reason="", readiness=false. Elapsed: 8.252207185s Jan 17 22:31:14.885: INFO: Pod "startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff" satisfied condition "not pending" Jan 17 22:31:14.885: INFO: Started pod startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff in namespace container-probe-7154 �[1mSTEP:�[0m checking the pod's current state and verifying that restartCount is present �[38;5;243m01/17/23 22:31:14.885�[0m Jan 17 22:31:15.018: INFO: Initial restart count of pod startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff is 0 Jan 17 22:32:12.351: INFO: Unexpected error: getting pod : <*rest.wrapPreviousError | 0xc002d9cd80>: { currentErr: <*url.Error | 0xc00374f710>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7154/pods/startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff", Err: <*net.OpError | 0xc001545680>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0005ea690>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002d9cd40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc0000c6130>{s: "unexpected EOF"}, } Jan 17 22:32:12.351: FAIL: getting pod : Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7154/pods/startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000ad8dc0, 0xc001cfdc00, 0x1, 0x37e11d6000?) 
test/e2e/common/node/container_probe.go:910 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.15() test/e2e/common/node/container_probe.go:338 +0x1c5 �[1mSTEP:�[0m deleting the pod �[38;5;243m01/17/23 22:32:12.351�[0m [AfterEach] [sig-node] Probing container test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "container-probe-7154". �[38;5;243m01/17/23 22:32:12.351�[0m Jan 17 22:32:12.466: INFO: Unexpected error: failed to list events in namespace "container-probe-7154": <*url.Error | 0xc00160c2d0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7154/events", Err: <*net.OpError | 0xc001545ae0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0007c2f90>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002d9d3a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.466: FAIL: failed to list events in namespace "container-probe-7154": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7154/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003663590, {0xc000af6cd8, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000a1e180}, {0xc000af6cd8, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000ad8dc0, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ad8dc0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "container-probe-7154" for this suite. �[38;5;243m01/17/23 22:32:12.467�[0m Jan 17 22:32:12.585: FAIL: Couldn't delete ns: "container-probe-7154": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7154": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7154", Err:(*net.OpError)(0xc001545ef0)}) Full Stack Trace panic({0x6ea2520, 0xc000a13880}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0006397a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0036f06c0, 0x106}, {0xc003663048?, 0x735bfcc?, 0xc003663068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002d30100, 0xf1}, {0xc0036630e0?, 0xc0007fb980?, 0xc003663108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc00160c2d0}, {0xc002d9d3e0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003663590, {0xc000af6cd8, 0x14}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000a1e180}, {0xc000af6cd8, 0x14}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000ad8dc0, 0x1?) 
test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ad8dc0) test/e2e/framework/framework.go:435 +0x21d
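For reference, "should be restarted startup probe fails" exercises a pod whose startup probe execs a path that is never created, so the kubelet must restart the container once failureThreshold is exhausted. Below is a sketch of that shape of pod using the Go API types; the pod name, image tag, command, and threshold values are illustrative assumptions, not the exact values the e2e test uses.

// Sketch only: a pod whose startup probe always fails because /tmp/startup never exists,
// which should drive the kubelet to restart the container.
package probecheck

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func failingStartupProbePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-probe-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-2", // image present on the nodes above
				Command: []string{"sh", "-c", "sleep 600"},
				StartupProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						// /tmp/startup is never created, so every probe attempt fails.
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
}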
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\shostPathSymlink\]\s\[Testpattern\:\sInline\-volume\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sexisting\sdirectory$'
test/e2e/storage/drivers/in_tree.go:955 k8s.io/kubernetes/test/e2e/storage/drivers.(*hostPathSymlinkDriver).CreateVolume(0xc000d1f380?, 0xc0039f45a0, {0xc0010e7ad0?, 0xc0039aa9e0?}) test/e2e/storage/drivers/in_tree.go:955 +0x9e5 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume({0x7c4e488, 0xc0010e7ad0}, 0xc000185b00?, {0x7373dcf, 0xc}) test/e2e/storage/framework/driver_operations.go:43 +0xd2 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource({0x7c4e488, 0xc0010e7ad0}, 0xc0039f45a0, {{0x73c7473, 0x1a}, {0x0, 0x0}, {0x7373dcf, 0xc}, {0x0, ...}, ...}, ...) test/e2e/storage/framework/volume_resource.go:65 +0x225 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1() test/e2e/storage/testsuites/subpath.go:128 +0x28e k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func4() test/e2e/storage/testsuites/subpath.go:207 +0x4d from junit_01.xml
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","completed":2,"skipped":18,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory"]} [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:39.539�[0m Jan 17 22:31:39.540: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/17/23 22:31:39.541�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:39.867�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:40.081�[0m [It] should support existing directory test/e2e/storage/testsuites/subpath.go:206 Jan 17 22:31:40.296: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics Jan 17 22:31:40.525: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2558" in namespace "provisioning-2558" to be "Succeeded or Failed" Jan 17 22:31:40.637: INFO: Pod "hostpath-symlink-prep-provisioning-2558": Phase="Pending", Reason="", readiness=false. Elapsed: 111.488104ms Jan 17 22:31:42.751: INFO: Pod "hostpath-symlink-prep-provisioning-2558": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225818944s Jan 17 22:31:44.745: INFO: Pod "hostpath-symlink-prep-provisioning-2558": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219905706s Jan 17 22:31:46.750: INFO: Pod "hostpath-symlink-prep-provisioning-2558": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224523805s Jan 17 22:31:48.744: INFO: Pod "hostpath-symlink-prep-provisioning-2558": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219420989s Jan 17 22:31:50.748: INFO: Pod "hostpath-symlink-prep-provisioning-2558": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.222814468s Jan 17 22:32:12.365: INFO: Encountered non-retryable error while getting pod provisioning-2558/hostpath-symlink-prep-provisioning-2558: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558/pods/hostpath-symlink-prep-provisioning-2558": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Jan 17 22:32:12.365: INFO: Unexpected error: while waiting for hostPath init pod to succeed: <*fmt.wrapError | 0xc002bfe700>: { msg: "error while waiting for pod provisioning-2558/hostpath-symlink-prep-provisioning-2558 to be Succeeded or Failed: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558/pods/hostpath-symlink-prep-provisioning-2558\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", err: <*rest.wrapPreviousError | 0xc002bfe6e0>{ currentErr: <*url.Error | 0xc003ab2a80>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558/pods/hostpath-symlink-prep-provisioning-2558", Err: <*net.OpError | 0xc003a8a190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0037c9a40>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002bfe6a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc0000c6130>{s: "unexpected EOF"}, }, } Jan 17 22:32:12.366: FAIL: while waiting for hostPath init pod to succeed: error while waiting for pod provisioning-2558/hostpath-symlink-prep-provisioning-2558 to be Succeeded or Failed: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558/pods/hostpath-symlink-prep-provisioning-2558": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/storage/drivers.(*hostPathSymlinkDriver).CreateVolume(0xc000d1f380?, 0xc0039f45a0, {0xc0010e7ad0?, 0xc0039aa9e0?}) test/e2e/storage/drivers/in_tree.go:955 +0x9e5 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume({0x7c4e488, 0xc0010e7ad0}, 0xc000185b00?, {0x7373dcf, 0xc}) test/e2e/storage/framework/driver_operations.go:43 +0xd2 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource({0x7c4e488, 0xc0010e7ad0}, 0xc0039f45a0, {{0x73c7473, 0x1a}, {0x0, 0x0}, {0x7373dcf, 0xc}, {0x0, ...}, ...}, ...) test/e2e/storage/framework/volume_resource.go:65 +0x225 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1() test/e2e/storage/testsuites/subpath.go:128 +0x28e k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func4() test/e2e/storage/testsuites/subpath.go:207 +0x4d [AfterEach] [Testpattern: Inline-volume (default fs)] subPath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "provisioning-2558". 
�[38;5;243m01/17/23 22:32:12.366�[0m Jan 17 22:32:12.481: INFO: Unexpected error: failed to list events in namespace "provisioning-2558": <*url.Error | 0xc002c19e60>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558/events", Err: <*net.OpError | 0xc0037bfb80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003ab3290>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0037ccce0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.481: FAIL: failed to list events in namespace "provisioning-2558": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003b55590, {0xc000611038, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc00398f980}, {0xc000611038, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001114000, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001114000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "provisioning-2558" for this suite. �[38;5;243m01/17/23 22:32:12.481�[0m Jan 17 22:32:12.599: FAIL: Couldn't delete ns: "provisioning-2558": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-2558", Err:(*net.OpError)(0xc0037bff90)}) Full Stack Trace panic({0x6ea2520, 0xc000460340}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0004e2af0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00313ba00, 0x100}, {0xc003b55048?, 0x735bfcc?, 0xc003b55068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002bfc0f0, 0xeb}, {0xc003b550e0?, 0xc001087e00?, 0xc003b55108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc002c19e60}, {0xc0037ccd20?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003b55590, {0xc000611038, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc00398f980}, {0xc000611038, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc001114000, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc001114000) test/e2e/framework/framework.go:435 +0x21d
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
test/e2e/storage/utils/local.go:141 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).findLoopDevice(0xc0028bad20, {0xc0027ca280?, 0x0?}, 0x0?) test/e2e/storage/utils/local.go:141 +0xb0 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).teardownLoopDevice(0xc0028bad20, {0xc0027ca280, 0x36}, 0xc0008b3d80) test/e2e/storage/utils/local.go:158 +0x4b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc0028bad20, 0xc001deeb40) test/e2e/storage/utils/local.go:167 +0x36 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0x0?, 0xc0033eb010?) test/e2e/storage/utils/local.go:351 +0x69 k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x37?) test/e2e/storage/drivers/in_tree.go:1953 +0x28 k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x7ca2818?) test/e2e/storage/utils/utils.go:714 +0x6d k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc0016d23c0) test/e2e/storage/framework/volume_resource.go:231 +0xc89 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func2() test/e2e/storage/testsuites/volumes.go:151 +0x4b k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volumes.go:204 +0xc2from junit_01.xml
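The cleanup path in this trace (CleanupResource → DeleteVolume → cleanupLocalVolumeBlock → teardownLoopDevice → findLoopDevice) has to exec back into the hostexec pod to find the loop device backing the block-type local volume, and the full log below shows why that step fails: the pod's agnhost-container had already been stopped during the earlier control-plane blip (the "Killing: Stopping container agnhost-container" event at 22:33:23), so the exec is rejected with "unable to upgrade connection: container not found". For readability, here is a sketch of the two shell commands involved, reconstructed from the URL-encoded ExecWithOptions calls further down; the Go wrapper and function names are illustrative, only the shell strings come from the log.

// Package loopdev is a hypothetical sketch of the shell run inside the
// hostexec pod; the command strings mirror the ExecWithOptions calls below.
package loopdev

import "fmt"

// SetupCmd creates a ~20 MiB backing file and attaches it to the first free
// loop device (losetup -f). Run once when the block-type local volume is created.
func SetupCmd(dir string) string {
	return fmt.Sprintf(
		"mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)
}

// FindCmd resolves which /dev/loopN the backing file is attached to. Teardown
// re-runs this inside the same hostexec pod, which is why cleanup fails once
// that pod's container has been stopped.
func FindCmd(dir string) string {
	return fmt.Sprintf(
		"E2E_LOOP_DEV=$(losetup | grep %[1]s/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}", dir)
}

With the path from this log ("/tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486"), SetupCmd reproduces the mkdir/dd/losetup line seen in the first exec and FindCmd the losetup|grep|awk line that later cannot be run.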
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","completed":3,"skipped":48,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:28.834�[0m Jan 17 22:31:28.834: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volume �[38;5;243m01/17/23 22:31:28.835�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:29.158�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:29.37�[0m [It] should allow exec of files on the volume test/e2e/storage/testsuites/volumes.go:198 Jan 17 22:31:29.691: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics �[1mSTEP:�[0m Creating block device on node "i-05a4ff7b848c70e4e" using path "/tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486" �[38;5;243m01/17/23 22:31:29.691�[0m Jan 17 22:31:29.806: INFO: Waiting up to 5m0s for pod "hostexec-i-05a4ff7b848c70e4e-w888m" in namespace "volume-8642" to be "running" Jan 17 22:31:29.916: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 110.145976ms Jan 17 22:31:32.023: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217206771s Jan 17 22:31:34.025: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218498256s Jan 17 22:31:36.026: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220069341s Jan 17 22:31:38.024: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217560499s Jan 17 22:31:40.024: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217870819s Jan 17 22:31:42.024: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 12.217934271s Jan 17 22:31:44.024: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 14.217782851s Jan 17 22:31:46.024: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Pending", Reason="", readiness=false. Elapsed: 16.217595705s Jan 17 22:31:48.023: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.217196343s Jan 17 22:31:48.023: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-w888m" satisfied condition "running" Jan 17 22:31:48.023: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486 && dd if=/dev/zero of=/tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486/file bs=4096 count=5120 && losetup -f /tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486/file] Namespace:volume-8642 PodName:hostexec-i-05a4ff7b848c70e4e-w888m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 17 22:31:48.023: INFO: >>> kubeConfig: /root/.kube/config Jan 17 22:31:48.024: INFO: ExecWithOptions: Clientset creation Jan 17 22:31:48.024: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8642/pods/hostexec-i-05a4ff7b848c70e4e-w888m/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 17 22:31:48.823: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volume-8642 PodName:hostexec-i-05a4ff7b848c70e4e-w888m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 17 22:31:48.823: INFO: >>> kubeConfig: /root/.kube/config Jan 17 22:31:48.823: INFO: ExecWithOptions: Clientset creation Jan 17 22:31:48.824: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8642/pods/hostexec-i-05a4ff7b848c70e4e-w888m/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 17 22:31:49.565: INFO: Creating resource for pre-provisioned PV Jan 17 22:31:49.565: INFO: Creating PVC and PV �[1mSTEP:�[0m Creating a PVC followed by a PV �[38;5;243m01/17/23 22:31:49.565�[0m Jan 17 22:31:49.781: INFO: Waiting for PV local-wfpq4 to bind to PVC pvc-x5r4w Jan 17 22:31:49.781: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x5r4w] to have phase Bound Jan 17 22:31:49.888: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:12.346: INFO: Failed to get claim "pvc-x5r4w", retrying in 2s. Error: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8642/persistentvolumeclaims/pvc-x5r4w": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Jan 17 22:32:29.951: INFO: Failed to get claim "pvc-x5r4w", retrying in 2s. 
Error: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8642/persistentvolumeclaims/pvc-x5r4w": dial tcp 54.78.31.51:443: connect: connection refused Jan 17 22:32:32.069: INFO: Failed to get claim "pvc-x5r4w", retrying in 2s. Error: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8642/persistentvolumeclaims/pvc-x5r4w": dial tcp 54.78.31.51:443: connect: connection refused Jan 17 22:32:38.342: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:40.451: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:42.558: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:44.663: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:46.770: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:48.876: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:50.983: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:53.089: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:55.196: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:57.302: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:32:59.408: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:01.513: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:03.620: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:05.727: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:07.833: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:09.938: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:12.044: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:14.151: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:16.258: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:18.364: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:20.470: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:22.577: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:24.683: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. Jan 17 22:33:26.790: INFO: PersistentVolumeClaim pvc-x5r4w found but phase is Pending instead of Bound. 
Jan 17 22:33:28.895: INFO: PersistentVolumeClaim pvc-x5r4w found and phase=Bound (1m39.114410876s) Jan 17 22:33:28.895: INFO: Waiting up to 3m0s for PersistentVolume local-wfpq4 to have phase Bound Jan 17 22:33:29.000: INFO: PersistentVolume local-wfpq4 found and phase=Bound (105.023838ms) �[1mSTEP:�[0m Creating pod exec-volume-test-preprovisionedpv-vjgm �[38;5;243m01/17/23 22:33:29.211�[0m �[1mSTEP:�[0m Creating a pod to test exec-volume-test �[38;5;243m01/17/23 22:33:29.211�[0m Jan 17 22:33:29.323: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-vjgm" in namespace "volume-8642" to be "Succeeded or Failed" Jan 17 22:33:29.428: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 105.542965ms Jan 17 22:33:31.544: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221857667s Jan 17 22:33:33.533: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21088953s Jan 17 22:33:35.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211563983s Jan 17 22:33:37.538: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.215590496s Jan 17 22:33:39.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.211865323s Jan 17 22:33:41.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.211416208s Jan 17 22:33:43.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.211787971s Jan 17 22:33:45.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 16.211399847s Jan 17 22:33:47.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 18.210914419s Jan 17 22:33:49.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 20.21097431s Jan 17 22:33:51.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 22.21127957s Jan 17 22:33:53.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 24.211091831s Jan 17 22:33:55.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 26.210943738s Jan 17 22:33:57.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 28.210909923s Jan 17 22:33:59.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 30.211491982s Jan 17 22:34:01.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 32.211028727s Jan 17 22:34:03.549: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 34.226071844s Jan 17 22:34:05.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 36.211113808s Jan 17 22:34:07.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.211712762s Jan 17 22:34:09.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 40.211700573s Jan 17 22:34:11.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 42.211192979s Jan 17 22:34:13.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 44.211687265s Jan 17 22:34:15.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 46.211224395s Jan 17 22:34:17.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 48.211089482s Jan 17 22:34:19.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 50.211787469s Jan 17 22:34:21.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 52.211329132s Jan 17 22:34:23.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 54.211363798s Jan 17 22:34:25.537: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 56.213934406s Jan 17 22:34:27.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 58.211059708s Jan 17 22:34:29.533: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.210682229s Jan 17 22:34:31.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.211032347s Jan 17 22:34:33.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.210907547s Jan 17 22:34:35.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.211315539s Jan 17 22:34:37.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.210958713s Jan 17 22:34:39.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.211288302s Jan 17 22:34:41.536: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.213594468s Jan 17 22:34:43.539: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.216291472s Jan 17 22:34:45.535: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.212385236s Jan 17 22:34:47.533: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.210764138s Jan 17 22:34:49.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.211463911s Jan 17 22:34:51.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.210970863s Jan 17 22:34:53.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.211428989s Jan 17 22:34:55.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.211521948s Jan 17 22:34:57.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m28.21094738s Jan 17 22:34:59.544: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.220998887s Jan 17 22:35:01.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.211229459s Jan 17 22:35:03.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.211837943s Jan 17 22:35:05.533: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.210871822s Jan 17 22:35:07.540: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.217273344s Jan 17 22:35:09.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.211553424s Jan 17 22:35:11.535: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.212179155s Jan 17 22:35:13.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m44.210959292s �[1mSTEP:�[0m Saw pod success �[38;5;243m01/17/23 22:35:13.534�[0m Jan 17 22:35:13.534: INFO: Pod "exec-volume-test-preprovisionedpv-vjgm" satisfied condition "Succeeded or Failed" Jan 17 22:35:13.639: INFO: Trying to get logs from node i-05a4ff7b848c70e4e pod exec-volume-test-preprovisionedpv-vjgm container exec-container-preprovisionedpv-vjgm: <nil> �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:35:13.754�[0m Jan 17 22:35:13.866: INFO: Waiting for pod exec-volume-test-preprovisionedpv-vjgm to disappear Jan 17 22:35:13.971: INFO: Pod exec-volume-test-preprovisionedpv-vjgm no longer exists �[1mSTEP:�[0m Deleting pod exec-volume-test-preprovisionedpv-vjgm �[38;5;243m01/17/23 22:35:13.971�[0m Jan 17 22:35:13.971: INFO: Deleting pod "exec-volume-test-preprovisionedpv-vjgm" in namespace "volume-8642" �[1mSTEP:�[0m Deleting pv and pvc �[38;5;243m01/17/23 22:35:14.076�[0m Jan 17 22:35:14.076: INFO: Deleting PersistentVolumeClaim "pvc-x5r4w" Jan 17 22:35:14.183: INFO: Deleting PersistentVolume "local-wfpq4" Jan 17 22:35:14.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volume-8642 PodName:hostexec-i-05a4ff7b848c70e4e-w888m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 17 22:35:14.291: INFO: >>> kubeConfig: /root/.kube/config Jan 17 22:35:14.292: INFO: ExecWithOptions: Clientset creation Jan 17 22:35:14.292: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-8642/pods/hostexec-i-05a4ff7b848c70e4e-w888m/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 17 22:35:14.639: INFO: exec i-05a4ff7b848c70e4e: command: E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-79e359e0-1bb1-49d1-8ae2-f8a23c5b2486/file | awk '{ print $1 }') 2>&1 > /dev/null && echo 
${E2E_LOOP_DEV} Jan 17 22:35:14.639: INFO: exec i-05a4ff7b848c70e4e: stdout: "" Jan 17 22:35:14.639: INFO: exec i-05a4ff7b848c70e4e: stderr: "" Jan 17 22:35:14.639: INFO: exec i-05a4ff7b848c70e4e: exit code: 0 Jan 17 22:35:14.639: INFO: Unexpected error: <*errors.errorString | 0xc001160b00>: { s: "unable to upgrade connection: container not found (\"agnhost-container\")", } Jan 17 22:35:14.639: FAIL: unable to upgrade connection: container not found ("agnhost-container") Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).findLoopDevice(0xc0028bad20, {0xc0027ca280?, 0x0?}, 0x0?) test/e2e/storage/utils/local.go:141 +0xb0 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).teardownLoopDevice(0xc0028bad20, {0xc0027ca280, 0x36}, 0xc0008b3d80) test/e2e/storage/utils/local.go:158 +0x4b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc0028bad20, 0xc001deeb40) test/e2e/storage/utils/local.go:167 +0x36 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0x0?, 0xc0033eb010?) test/e2e/storage/utils/local.go:351 +0x69 k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x37?) test/e2e/storage/drivers/in_tree.go:1953 +0x28 k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x7ca2818?) test/e2e/storage/utils/utils.go:714 +0x6d k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc0016d23c0) test/e2e/storage/framework/volume_resource.go:231 +0xc89 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func2() test/e2e/storage/testsuites/volumes.go:151 +0x4b k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func4() test/e2e/storage/testsuites/volumes.go:204 +0xc2 �[1mSTEP:�[0m Deleting pod hostexec-i-05a4ff7b848c70e4e-w888m in namespace volume-8642 �[38;5;243m01/17/23 22:35:14.64�[0m [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "volume-8642". �[38;5;243m01/17/23 22:35:14.759�[0m �[1mSTEP:�[0m Found 14 events. 
�[38;5;243m01/17/23 22:35:14.876�[0m Jan 17 22:35:14.876: INFO: At 2023-01-17 22:31:29 +0000 UTC - event for hostexec-i-05a4ff7b848c70e4e-w888m: {default-scheduler } Scheduled: Successfully assigned volume-8642/hostexec-i-05a4ff7b848c70e4e-w888m to i-05a4ff7b848c70e4e Jan 17 22:35:14.876: INFO: At 2023-01-17 22:31:30 +0000 UTC - event for hostexec-i-05a4ff7b848c70e4e-w888m: {kubelet i-05a4ff7b848c70e4e} Started: Started container agnhost-container Jan 17 22:35:14.876: INFO: At 2023-01-17 22:31:30 +0000 UTC - event for hostexec-i-05a4ff7b848c70e4e-w888m: {kubelet i-05a4ff7b848c70e4e} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.40" already present on machine Jan 17 22:35:14.876: INFO: At 2023-01-17 22:31:30 +0000 UTC - event for hostexec-i-05a4ff7b848c70e4e-w888m: {kubelet i-05a4ff7b848c70e4e} Created: Created container agnhost-container Jan 17 22:35:14.876: INFO: At 2023-01-17 22:31:49 +0000 UTC - event for pvc-x5r4w: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-8642" not found Jan 17 22:35:14.876: INFO: At 2023-01-17 22:33:23 +0000 UTC - event for hostexec-i-05a4ff7b848c70e4e-w888m: {kubelet i-05a4ff7b848c70e4e} Killing: Stopping container agnhost-container Jan 17 22:35:14.876: INFO: At 2023-01-17 22:33:29 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {default-scheduler } FailedScheduling: 0/5 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod. Jan 17 22:35:14.876: INFO: At 2023-01-17 22:33:59 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {default-scheduler } Scheduled: Successfully assigned volume-8642/exec-volume-test-preprovisionedpv-vjgm to i-05a4ff7b848c70e4e Jan 17 22:35:14.876: INFO: At 2023-01-17 22:34:01 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod volume-8642/exec-volume-test-preprovisionedpv-vjgm Jan 17 22:35:14.876: INFO: At 2023-01-17 22:34:01 +0000 UTC - event for hostexec-i-05a4ff7b848c70e4e-w888m: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod volume-8642/hostexec-i-05a4ff7b848c70e4e-w888m Jan 17 22:35:14.876: INFO: At 2023-01-17 22:34:04 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {kubelet i-05a4ff7b848c70e4e} FailedMount: MountVolume.NewMounter initialization failed for volume "local-wfpq4" : path "/dev/loop0" does not exist Jan 17 22:35:14.876: INFO: At 2023-01-17 22:35:09 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {kubelet i-05a4ff7b848c70e4e} Created: Created container exec-container-preprovisionedpv-vjgm Jan 17 22:35:14.876: INFO: At 2023-01-17 22:35:09 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {kubelet i-05a4ff7b848c70e4e} Pulled: Container image "registry.k8s.io/e2e-test-images/nginx:1.14-2" already present on machine Jan 17 22:35:14.876: INFO: At 2023-01-17 22:35:10 +0000 UTC - event for exec-volume-test-preprovisionedpv-vjgm: {kubelet i-05a4ff7b848c70e4e} Started: Started container exec-container-preprovisionedpv-vjgm Jan 17 22:35:14.981: INFO: POD NODE PHASE GRACE CONDITIONS Jan 17 22:35:14.981: INFO: Jan 17 22:35:15.191: INFO: Logging node info for node i-0242e0df14fd9a246 Jan 17 22:35:15.296: INFO: Node Info: &Node{ObjectMeta:{i-0242e0df14fd9a246 0b21abc2-41c7-4385-8f7a-1e581a05d7f6 8226 0 2023-01-17 22:23:42 +0000 UTC <nil> <nil> 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0242e0df14fd9a246 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.hostpath.csi/node:i-0242e0df14fd9a246 topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-7169":"i-0242e0df14fd9a246","csi-hostpath-volume-expand-5471":"i-0242e0df14fd9a246","csi-mock-csi-mock-volumes-1859":"i-0242e0df14fd9a246","ebs.csi.aws.com":"i-0242e0df14fd9a246"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-17 22:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2023-01-17 22:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-17 22:23:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:24:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:35:11 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-17 22:35:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-0242e0df14fd9a246,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:51 +0000 UTC,LastTransitionTime:2023-01-17 22:23:51 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:35:13 +0000 UTC,LastTransitionTime:2023-01-17 22:23:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:35:13 +0000 UTC,LastTransitionTime:2023-01-17 22:23:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:35:13 +0000 UTC,LastTransitionTime:2023-01-17 22:23:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:35:13 +0000 UTC,LastTransitionTime:2023-01-17 22:24:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.200,},NodeAddress{Type:ExternalIP,Address:52.213.7.85,},NodeAddress{Type:InternalDNS,Address:i-0242e0df14fd9a246.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0242e0df14fd9a246.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-213-7-85.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2999a568d8f64a6bb6da38e830c71d,SystemUUID:ec2999a5-68d8-f64a-6bb6-da38e830c71d,BootID:59d34639-aa2c-481c-843b-ffd918a461a4,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 registry.k8s.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f registry.k8s.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa registry.k8s.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:89e900a160a986a1a7a4eba7f5259e510398fa87ca9b8a729e7dec59e04c7709 registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1],SizeBytes:22163966,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a registry.k8s.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4fd21f36075b44d1a423dfb262ad79202ce54e95f5cbc4622a6c1c38ab287ad6 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0],SizeBytes:9132637,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-7169^16b1934b-96b7-11ed-a2d9-0a1c889d2222 kubernetes.io/csi/csi-hostpath-ephemeral-7169^16b2bfb4-96b7-11ed-a2d9-0a1c889d2222 kubernetes.io/csi/csi-mock-csi-mock-volumes-1859^02aa9583-96b7-11ed-804f-e2a286ed4273 kubernetes.io/csi/ebs.csi.aws.com^vol-080b3facb3b2d4c32 kubernetes.io/csi/ebs.csi.aws.com^vol-09e418e008e5a2790],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-7169^16b2bfb4-96b7-11ed-a2d9-0a1c889d2222,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-1859^02aa9583-96b7-11ed-804f-e2a286ed4273,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-7169^16b1934b-96b7-11ed-a2d9-0a1c889d2222,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-080b3facb3b2d4c32,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-09e418e008e5a2790,DevicePath:,},},Config:nil,},} Jan 17 22:35:15.297: INFO: Logging kubelet events for node i-0242e0df14fd9a246 Jan 17 22:35:15.408: INFO: Logging pods the kubelet thinks is on node i-0242e0df14fd9a246 Jan 17 22:35:15.738: INFO: simpletest.rc-ptsqn started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-sfplv started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: hostexec-i-0242e0df14fd9a246-g7tf7 started at 2023-01-17 22:31:48 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:35:15.738: INFO: kube-proxy-i-0242e0df14fd9a246 started at 2023-01-17 22:23:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:35:15.738: INFO: netserver-0 started at 2023-01-17 22:31:16 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:15.738: INFO: all-succeed-6zmzw started at 2023-01-17 22:31:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container c ready: false, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-72njr started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-68pvv started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: exec-volume-test-dynamicpv-plg8 started at 2023-01-17 22:31:10 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container exec-container-dynamicpv-plg8 ready: false, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-wz9vn started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 
22:35:15.738: INFO: simpletest.rc-6wxsg started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: inline-volume-tester-fvxdm started at 2023-01-17 22:34:23 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-w5z4k started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: all-succeed-xwz8t started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container c ready: false, restart count 0 Jan 17 22:35:15.738: INFO: downwardapi-volume-9863e93e-9feb-4a21-8555-7982a94f7680 started at <nil> (0+0 container statuses recorded) Jan 17 22:35:15.738: INFO: csi-hostpathplugin-0 started at <nil> (0+0 container statuses recorded) Jan 17 22:35:15.738: INFO: simpletest.rc-mws4p started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: pod-terminate-status-1-4 started at <nil> (0+0 container statuses recorded) Jan 17 22:35:15.738: INFO: pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3 started at 2023-01-17 22:31:44 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container agnhost-container ready: false, restart count 0 Jan 17 22:35:15.738: INFO: coredns-autoscaler-5b9dc8bb99-d9gwq started at 2023-01-17 22:33:37 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container autoscaler ready: true, restart count 0 Jan 17 22:35:15.738: INFO: sample-apiserver-deployment-5885c99c55-hdptf started at 2023-01-17 22:31:42 +0000 UTC (0+2 container statuses recorded) Jan 17 22:35:15.738: INFO: Container etcd ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container sample-apiserver ready: true, restart count 0 Jan 17 22:35:15.738: INFO: pod-subpath-test-dynamicpv-cqdc started at 2023-01-17 22:31:47 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container test-container-subpath-dynamicpv-cqdc ready: false, restart count 0 Jan 17 22:35:15.738: INFO: netserver-0 started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-wz4hm started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-lmwjz started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-b7qmc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: hostpath-symlink-prep-provisioning-2558 started at 2023-01-17 22:31:40 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container init-volume-provisioning-2558 ready: false, restart count 0 Jan 17 22:35:15.738: INFO: concurrent-27899915-m2682 started at 2023-01-17 22:35:00 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container c ready: false, restart count 0 Jan 17 22:35:15.738: INFO: 
simple-27899915-f7pzr started at 2023-01-17 22:35:00 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container c ready: true, restart count 0 Jan 17 22:35:15.738: INFO: startup-c485c102-c6f2-4c81-bcc6-fd69419d3aff started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container busybox ready: false, restart count 3 Jan 17 22:35:15.738: INFO: simpletest.rc-48twj started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-pzdql started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: test-ss-0 started at 2023-01-17 22:31:35 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:15.738: INFO: externalname-service-f76cf started at 2023-01-17 22:35:07 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container externalname-service ready: true, restart count 0 Jan 17 22:35:15.738: INFO: csi-mockplugin-attacher-0 started at 2023-01-17 22:33:31 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container csi-attacher ready: true, restart count 0 Jan 17 22:35:15.738: INFO: forbid-27899915-vwgk5 started at 2023-01-17 22:35:00 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container c ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-nj774 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-ft7tv started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: csi-hostpathplugin-0 started at 2023-01-17 22:34:20 +0000 UTC (0+7 container statuses recorded) Jan 17 22:35:15.738: INFO: Container csi-attacher ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container csi-resizer ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container hostpath ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 17 22:35:15.738: INFO: pvc-volume-tester-rr4dz started at 2023-01-17 22:33:49 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container volume-tester ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-hs4vc started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-v44gx started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-q8rdm started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-fp2rb started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses 
recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: test-webserver-8930ced3-275e-4cf0-a8f1-de1b1378212b started at 2023-01-17 22:34:41 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container test-webserver ready: false, restart count 0 Jan 17 22:35:15.738: INFO: pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38 started at 2023-01-17 22:31:47 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container secret-volume-test ready: false, restart count 0 Jan 17 22:35:15.738: INFO: test-container-pod started at 2023-01-17 22:31:50 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:15.738: INFO: ebs-csi-node-j85wb started at 2023-01-17 22:23:42 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:15.738: INFO: Container ebs-plugin ready: true, restart count 2 Jan 17 22:35:15.738: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:35:15.738: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 17 22:35:15.738: INFO: netserver-0 started at 2023-01-17 22:34:37 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:15.738: INFO: pod-should-be-evicted0cd9b1c5-ccd8-45ca-a62b-8b0c9c234d0f started at 2023-01-17 22:34:38 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container bar ready: true, restart count 0 Jan 17 22:35:15.738: INFO: csi-mockplugin-0 started at 2023-01-17 22:33:31 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:15.738: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container driver-registrar ready: true, restart count 0 Jan 17 22:35:15.738: INFO: Container mock ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-w8xxk started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: all-succeed-dwbk2 started at 2023-01-17 22:31:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container c ready: false, restart count 0 Jan 17 22:35:15.738: INFO: host-test-container-pod started at 2023-01-17 22:31:50 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:35:15.738: INFO: inline-volume-tester-g8tc9 started at 2023-01-17 22:34:49 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 17 22:35:15.738: INFO: busybox-0eeae1b6-87b2-4f12-8590-0524d597bfe3 started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container busybox ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-j6qgk started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-gmh7p started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: pod-71106eba-b8fb-4dd6-9a4a-0ac7fed58546 started at <nil> (0+0 container statuses recorded) Jan 17 22:35:15.738: INFO: startup-0a47618c-fcb2-4e9e-badb-4f41f73ac997 started at 2023-01-17 22:33:27 +0000 UTC 
(0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container busybox ready: false, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-2b882 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-krcwv started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:15.738: INFO: simpletest.rc-b6kpc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:15.738: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.409: INFO: Latency metrics for node i-0242e0df14fd9a246 Jan 17 22:35:16.409: INFO: Logging node info for node i-0343380b4938db9ae Jan 17 22:35:16.515: INFO: Node Info: &Node{ObjectMeta:{i-0343380b4938db9ae f0ba41a2-7254-4dfe-a6dc-6961e20c2727 7945 0 2023-01-17 22:23:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0343380b4938db9ae kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-4579":"csi-mock-csi-mock-volumes-4579","ebs.csi.aws.com":"i-0343380b4938db9ae"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kops-controller Update v1 2023-01-17 22:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:41 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:28:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-17 22:35:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-0343380b4938db9ae,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:41 +0000 UTC,LastTransitionTime:2023-01-17 22:23:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:32:38 +0000 UTC,LastTransitionTime:2023-01-17 22:28:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.8,},NodeAddress{Type:ExternalIP,Address:34.247.32.45,},NodeAddress{Type:InternalDNS,Address:i-0343380b4938db9ae.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0343380b4938db9ae.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-247-32-45.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22f9541f6e4acecb3762e769a2a117,SystemUUID:ec22f954-1f6e-4ace-cb37-62e769a2a117,BootID:9c8e1afb-e8db-4c0f-83ad-ba6d32d6cbde,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 17 22:35:16.515: INFO: Logging kubelet events for node i-0343380b4938db9ae Jan 17 22:35:16.632: INFO: Logging pods the kubelet thinks is on node i-0343380b4938db9ae Jan 17 22:35:16.754: INFO: ebs-csi-node-8z29v started at 2023-01-17 22:28:31 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:16.754: INFO: Container ebs-plugin ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 17 22:35:16.754: INFO: netserver-1 started at 2023-01-17 22:31:16 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-c8wrj started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-vr9n8 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-lprmm started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: 
Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-8pw96 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-rsfll started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-vhhwc started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-fk8jg started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-snjt7 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-jqd57 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: netserver-1 started at 2023-01-17 22:34:37 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:16.754: INFO: csi-mockplugin-0 started at 2023-01-17 22:34:58 +0000 UTC (0+4 container statuses recorded) Jan 17 22:35:16.754: INFO: Container busybox ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container driver-registrar ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container mock ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-z4fmc started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-8gzx8 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-xq8h9 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-g75fv started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-n8fnm started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: coredns-85d58b74c8-4sqt8 started at 2023-01-17 22:24:10 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container coredns ready: true, restart count 1 Jan 17 22:35:16.754: INFO: simpletest.rc-xbpb6 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-6rrw4 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-95xz8 started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx 
ready: true, restart count 0 Jan 17 22:35:16.754: INFO: netserver-1 started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:16.754: INFO: csi-mockplugin-0 started at 2023-01-17 22:35:14 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:16.754: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container driver-registrar ready: true, restart count 0 Jan 17 22:35:16.754: INFO: Container mock ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-996xj started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-lknsp started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: pvc-volume-tester-dz5xv started at 2023-01-17 22:35:11 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container volume-tester ready: false, restart count 0 Jan 17 22:35:16.754: INFO: kube-proxy-i-0343380b4938db9ae started at 2023-01-17 22:23:24 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:35:16.754: INFO: csi-mockplugin-attacher-0 started at 2023-01-17 22:35:14 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container csi-attacher ready: false, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-r6j9l started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-m4jnp started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-b8hnr started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-7hk4t started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:16.754: INFO: simpletest.rc-tprcg started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:16.754: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:17.373: INFO: Latency metrics for node i-0343380b4938db9ae Jan 17 22:35:17.373: INFO: Logging node info for node i-05a4ff7b848c70e4e Jan 17 22:35:17.478: INFO: Node Info: &Node{ObjectMeta:{i-05a4ff7b848c70e4e 2b1b6da2-ac5c-4314-a72a-a4fdd7d9499e 8324 0 2023-01-17 22:23:33 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-05a4ff7b848c70e4e kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.hostpath.csi/node:i-05a4ff7b848c70e4e topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9903":"i-05a4ff7b848c70e4e","csi-mock-csi-mock-volumes-1084":"i-05a4ff7b848c70e4e","csi-mock-csi-mock-volumes-9046":"i-05a4ff7b848c70e4e","ebs.csi.aws.com":"i-05a4ff7b848c70e4e"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-17 22:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:41 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:33:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-17 22:35:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-17 22:35:15 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-05a4ff7b848c70e4e,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:41 +0000 UTC,LastTransitionTime:2023-01-17 22:23:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:35:00 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:35:00 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:35:00 +0000 UTC,LastTransitionTime:2023-01-17 22:23:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:35:00 +0000 UTC,LastTransitionTime:2023-01-17 22:33:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.42.34,},NodeAddress{Type:ExternalIP,Address:54.171.91.207,},NodeAddress{Type:InternalDNS,Address:i-05a4ff7b848c70e4e.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-05a4ff7b848c70e4e.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-171-91-207.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b7446590104cba2b91864c81f3065,SystemUUID:ec2b7446-5901-04cb-a2b9-1864c81f3065,BootID:324b3172-1994-41e8-ab60-29098f47a2ce,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 registry.k8s.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa registry.k8s.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:89e900a160a986a1a7a4eba7f5259e510398fa87ca9b8a729e7dec59e04c7709 
registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1],SizeBytes:22163966,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb registry.k8s.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a registry.k8s.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4fd21f36075b44d1a423dfb262ad79202ce54e95f5cbc4622a6c1c38ab287ad6 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0],SizeBytes:9132637,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9903^260f741d-96b7-11ed-9464-369a421bbacc kubernetes.io/csi/csi-mock-csi-mock-volumes-9046^1e75744c-96b7-11ed-bf43-a263a27e041e 
kubernetes.io/csi/ebs.csi.aws.com^vol-06167d32c3335edac],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-1084^35a1b117-96b7-11ed-a1b1-42b7b0f2a9ce,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-06167d32c3335edac,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-9046^1e75744c-96b7-11ed-bf43-a263a27e041e,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9903^260f741d-96b7-11ed-9464-369a421bbacc,DevicePath:,},},Config:nil,},} Jan 17 22:35:17.479: INFO: Logging kubelet events for node i-05a4ff7b848c70e4e Jan 17 22:35:17.588: INFO: Logging pods the kubelet thinks is on node i-05a4ff7b848c70e4e Jan 17 22:35:17.712: INFO: simpletest.rc-5nn2c started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-h6rm8 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-zplng started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-49fh7 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-rxp6f started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: csi-mockplugin-resizer-0 started at 2023-01-17 22:34:05 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-resizer ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-btkzj started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: inline-volume-tester-7wcgw started at 2023-01-17 22:34:48 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 17 22:35:17.712: INFO: hostexec-i-05a4ff7b848c70e4e-scq5f started at 2023-01-17 22:31:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-lkgbr started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-zrrmj started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simple-27899914-t46km started at 2023-01-17 22:33:58 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container c ready: true, restart count 0 Jan 17 22:35:17.712: INFO: concurrent-27899914-vl52r started at 2023-01-17 22:33:58 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container c ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-fj8bc started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-xxx8d 
started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-hmxsd started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-hvb6x started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: kube-proxy-i-05a4ff7b848c70e4e started at 2023-01-17 22:23:24 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:35:17.712: INFO: ebs-csi-node-cbx6s started at 2023-01-17 22:33:58 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:17.712: INFO: Container ebs-plugin ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 17 22:35:17.712: INFO: csi-mockplugin-0 started at 2023-01-17 22:34:05 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container driver-registrar ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container mock ready: true, restart count 0 Jan 17 22:35:17.712: INFO: hostexec-i-05a4ff7b848c70e4e-drgjz started at 2023-01-17 22:31:06 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container agnhost-container ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-rbrjl started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:17.712: INFO: pod-subpath-test-preprovisionedpv-ct78 started at 2023-01-17 22:31:22 +0000 UTC (1+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Init container init-volume-preprovisionedpv-ct78 ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container test-container-subpath-preprovisionedpv-ct78 ready: false, restart count 0 Jan 17 22:35:17.712: INFO: csi-mockplugin-0 started at 2023-01-17 22:35:08 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container driver-registrar ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container mock ready: true, restart count 0 Jan 17 22:35:17.712: INFO: netserver-2 started at 2023-01-17 22:31:07 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container webserver ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-867n7 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: csi-mockplugin-attacher-0 started at 2023-01-17 22:34:05 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-attacher ready: true, restart count 0 Jan 17 22:35:17.712: INFO: pvc-volume-tester-zzfsc started at 2023-01-17 22:35:14 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container volume-tester ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-jlg5z started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container 
nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-9rsrp started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: coredns-85d58b74c8-4xxft started at 2023-01-17 22:23:34 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container coredns ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-kzbsm started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-z6nrv started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: false, restart count 0 Jan 17 22:35:17.712: INFO: csi-hostpathplugin-0 started at 2023-01-17 22:34:41 +0000 UTC (0+7 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-attacher ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container csi-provisioner ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container csi-resizer ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container hostpath ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container liveness-probe ready: true, restart count 0 Jan 17 22:35:17.712: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 17 22:35:17.712: INFO: csi-mockplugin-attacher-0 started at 2023-01-17 22:35:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-attacher ready: true, restart count 0 Jan 17 22:35:17.712: INFO: csi-mockplugin-resizer-0 started at 2023-01-17 22:35:08 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-resizer ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-6wdx2 started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:17.712: INFO: pvc-volume-tester-rg5b9 started at 2023-01-17 22:34:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container volume-tester ready: true, restart count 0 Jan 17 22:35:17.712: INFO: coredns-autoscaler-5b9dc8bb99-96mpn started at 2023-01-17 22:23:34 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container autoscaler ready: true, restart count 0 Jan 17 22:35:17.712: INFO: netserver-2 started at 2023-01-17 22:31:16 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-w826j started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-956xj started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-hbrsh started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: netserver-2 started at 2023-01-17 22:34:37 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-k92b5 started 
at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 1 Jan 17 22:35:17.712: INFO: simpletest.rc-fzsz4 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:17.712: INFO: inline-volume-tester-jzh47 started at 2023-01-17 22:34:25 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 17 22:35:17.712: INFO: simpletest.rc-68n55 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:17.712: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.215: INFO: Latency metrics for node i-05a4ff7b848c70e4e Jan 17 22:35:18.215: INFO: Logging node info for node i-07023e4c3916cc727 Jan 17 22:35:18.322: INFO: Node Info: &Node{ObjectMeta:{i-07023e4c3916cc727 a3ff0af6-8c1e-426e-9c46-865046711c4f 7514 0 2023-01-17 22:23:34 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-07023e4c3916cc727 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-1a topology.hostpath.csi/node:i-07023e4c3916cc727 topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07023e4c3916cc727"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-17 22:23:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kops-controller Update v1 2023-01-17 22:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:23:41 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-17 22:27:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:34:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-17 22:34:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-07023e4c3916cc727,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:23:41 +0000 UTC,LastTransitionTime:2023-01-17 22:23:41 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:34:52 +0000 UTC,LastTransitionTime:2023-01-17 22:23:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:34:52 +0000 UTC,LastTransitionTime:2023-01-17 22:23:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:34:52 +0000 UTC,LastTransitionTime:2023-01-17 22:23:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:34:52 +0000 UTC,LastTransitionTime:2023-01-17 22:27:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.35.9,},NodeAddress{Type:ExternalIP,Address:34.253.197.55,},NodeAddress{Type:InternalDNS,Address:i-07023e4c3916cc727.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-07023e4c3916cc727.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-253-197-55.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec23921812a34c08be6592c8614a95a4,SystemUUID:ec239218-12a3-4c08-be65-92c8614a95a4,BootID:39a5df56-82ab-4cc6-bbc8-670bdfaa645b,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 registry.k8s.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 registry.k8s.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 registry.k8s.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:89e900a160a986a1a7a4eba7f5259e510398fa87ca9b8a729e7dec59e04c7709 registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1],SizeBytes:22163966,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4fd21f36075b44d1a423dfb262ad79202ce54e95f5cbc4622a6c1c38ab287ad6 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.0],SizeBytes:9132637,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 registry.k8s.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-09e4f623092f973b6 kubernetes.io/csi/ebs.csi.aws.com^vol-0c864590c874a98e4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0c864590c874a98e4,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-09e4f623092f973b6,DevicePath:,},},Config:nil,},} Jan 17 22:35:18.322: INFO: Logging kubelet events for node i-07023e4c3916cc727 Jan 17 22:35:18.431: INFO: Logging pods the kubelet thinks is on node i-07023e4c3916cc727 Jan 17 22:35:18.546: INFO: simpletest.rc-zlk5h started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-s946w started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: inline-volume-tester-fvpkn started at 2023-01-17 22:34:46 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-j59fh started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-bndt8 started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: inline-volume-tester2-zt8pl started at 2023-01-17 22:34:44 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-865zr started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-hp87b started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-bdz2r started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-5g2dq started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: externalname-service-sqgxk started at 2023-01-17 22:35:07 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container externalname-service ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-g68mw started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-sb9m9 started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-7rhhc started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-nx7nc started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 
22:35:18.546: INFO: simpletest.rc-szlxc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-b5ktr started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-x29fd started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: pod-subpath-test-configmap-6hmt started at 2023-01-17 22:34:53 +0000 UTC (1+2 container statuses recorded) Jan 17 22:35:18.546: INFO: Init container init-volume-configmap-6hmt ready: true, restart count 0 Jan 17 22:35:18.546: INFO: Container test-container-subpath-configmap-6hmt ready: true, restart count 2 Jan 17 22:35:18.546: INFO: Container test-container-volume-configmap-6hmt ready: true, restart count 0 Jan 17 22:35:18.546: INFO: kube-proxy-i-07023e4c3916cc727 started at 2023-01-17 22:23:25 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:35:18.546: INFO: ebs-csi-node-tmp4f started at 2023-01-17 22:23:35 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:18.546: INFO: Container ebs-plugin ready: true, restart count 2 Jan 17 22:35:18.546: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:35:18.546: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 17 22:35:18.546: INFO: netserver-3 started at 2023-01-17 22:31:17 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:18.546: INFO: netserver-3 started at 2023-01-17 22:31:07 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-chvmn started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: all-succeed-jhlhk started at 2023-01-17 22:31:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container c ready: false, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-v476r started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-f5z59 started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-bf5pk started at 2023-01-17 22:31:22 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: test-container-pod started at 2023-01-17 22:31:50 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-f2wrm started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-9hkmg started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: 
host-test-container-pod started at 2023-01-17 22:31:50 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container agnhost-container ready: true, restart count 0 Jan 17 22:35:18.546: INFO: netserver-3 started at 2023-01-17 22:34:37 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container webserver ready: true, restart count 0 Jan 17 22:35:18.546: INFO: execpod8wvjw started at <nil> (0+0 container statuses recorded) Jan 17 22:35:18.546: INFO: pod-terminate-status-2-4 started at <nil> (0+0 container statuses recorded) Jan 17 22:35:18.546: INFO: test-container-pod started at 2023-01-17 22:35:12 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container webserver ready: false, restart count 0 Jan 17 22:35:18.546: INFO: pod-terminate-status-1-5 started at <nil> (0+0 container statuses recorded) Jan 17 22:35:18.546: INFO: simple-27899913-dn7ml started at 2023-01-17 22:33:27 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container c ready: true, restart count 0 Jan 17 22:35:18.546: INFO: concurrent-27899913-r52pg started at 2023-01-17 22:33:27 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container c ready: false, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-c6pl8 started at 2023-01-17 22:31:19 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-nb4s4 started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-dsbhs started at 2023-01-17 22:31:21 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: webhook-to-be-mutated started at 2023-01-17 22:35:02 +0000 UTC (1+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Init container webhook-added-init-container ready: false, restart count 0 Jan 17 22:35:18.546: INFO: Container example ready: false, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-q9qr5 started at 2023-01-17 22:31:18 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: simpletest.rc-76wpc started at 2023-01-17 22:31:20 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container nginx ready: true, restart count 0 Jan 17 22:35:18.546: INFO: explicit-nonroot-uid started at 2023-01-17 22:35:11 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:18.546: INFO: Container explicit-nonroot-uid ready: false, restart count 0 Jan 17 22:35:18.962: INFO: Latency metrics for node i-07023e4c3916cc727 Jan 17 22:35:18.962: INFO: Logging node info for node i-0f4738b0932ab9299 Jan 17 22:35:19.071: INFO: Node Info: &Node{ObjectMeta:{i-0f4738b0932ab9299 93dbf5f1-6205-48f8-b119-5372216e3b73 4587 0 2023-01-17 22:22:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-1 failure-domain.beta.kubernetes.io/zone:eu-west-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f4738b0932ab9299 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:eu-west-1a 
topology.kubernetes.io/region:eu-west-1 topology.kubernetes.io/zone:eu-west-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0f4738b0932ab9299"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-17 22:22:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {protokube Update v1 2023-01-17 22:22:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-17 22:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:22:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-17 22:22:40 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}},"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-17 22:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-1a/i-0f4738b0932ab9299,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-17 22:22:40 +0000 UTC,LastTransitionTime:2023-01-17 22:22:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:21:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:21:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:21:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-17 22:32:37 +0000 UTC,LastTransitionTime:2023-01-17 22:32:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.63,},NodeAddress{Type:ExternalIP,Address:54.78.31.51,},NodeAddress{Type:InternalDNS,Address:i-0f4738b0932ab9299.eu-west-1.compute.internal,},NodeAddress{Type:Hostname,Address:i-0f4738b0932ab9299.eu-west-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-78-31-51.eu-west-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2512be5df24a68030b243f0f25f7cc,SystemUUID:ec2512be-5df2-4a68-030b-243f0f25f7cc,BootID:88d5eff3-6b58-4e67-9934-581cdce3fe94,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.25.5,KubeProxyVersion:v1.25.5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.25.5],SizeBytes:129100243,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.25.5],SizeBytes:118446393,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.25.5],SizeBytes:63291081,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.25.5],SizeBytes:51931448,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191755,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821707,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b 
registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:dcccdfba225e93ba2060a4c0b9072b50b0a564354c37bba6ed3ce89c326db58c registry.k8s.io/provider-aws/cloud-controller-manager:v1.25.2],SizeBytes:18280697,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965792,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 17 22:35:19.071: INFO: Logging kubelet events for node i-0f4738b0932ab9299 Jan 17 22:35:19.188: INFO: Logging pods the kubelet thinks is on node i-0f4738b0932ab9299 Jan 17 22:35:19.305: INFO: kube-controller-manager-i-0f4738b0932ab9299 started at 2023-01-17 22:21:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 17 22:35:19.305: INFO: kube-proxy-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container kube-proxy ready: true, restart count 1 Jan 17 22:35:19.305: INFO: ebs-csi-controller-696c7b9c79-9fsrb started at 2023-01-17 22:22:36 +0000 UTC (0+5 container statuses recorded) Jan 17 22:35:19.305: INFO: Container csi-attacher ready: true, restart count 2 Jan 17 22:35:19.305: INFO: Container csi-provisioner ready: true, restart count 2 Jan 17 22:35:19.305: INFO: Container csi-resizer ready: true, restart count 1 Jan 17 22:35:19.305: INFO: Container ebs-plugin ready: true, restart count 1 Jan 17 22:35:19.305: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:35:19.305: INFO: etcd-manager-events-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container etcd-manager ready: true, restart count 1 Jan 17 22:35:19.305: INFO: kube-apiserver-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+2 container statuses recorded) Jan 17 22:35:19.305: INFO: Container healthcheck ready: true, restart count 1 Jan 17 22:35:19.305: INFO: Container kube-apiserver ready: true, restart count 2 Jan 17 22:35:19.305: INFO: kube-scheduler-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container kube-scheduler ready: true, restart count 1 Jan 17 22:35:19.305: INFO: dns-controller-56d4f686f6-wgj8p started at 2023-01-17 22:22:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container dns-controller ready: true, restart count 1 Jan 17 22:35:19.305: INFO: kops-controller-m2qmj started at 2023-01-17 22:22:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container kops-controller ready: true, restart count 2 Jan 17 22:35:19.305: INFO: ebs-csi-node-4zmsj started at 2023-01-17 22:22:36 +0000 UTC (0+3 container statuses recorded) Jan 17 22:35:19.305: INFO: Container ebs-plugin ready: 
true, restart count 1 Jan 17 22:35:19.305: INFO: Container liveness-probe ready: true, restart count 1 Jan 17 22:35:19.305: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 17 22:35:19.305: INFO: aws-cloud-controller-manager-gmgnz started at 2023-01-17 22:22:36 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container aws-cloud-controller-manager ready: true, restart count 2 Jan 17 22:35:19.305: INFO: etcd-manager-main-i-0f4738b0932ab9299 started at 2023-01-17 22:32:29 +0000 UTC (0+1 container statuses recorded) Jan 17 22:35:19.305: INFO: Container etcd-manager ready: true, restart count 1 Jan 17 22:35:19.722: INFO: Latency metrics for node i-0f4738b0932ab9299 Jan 17 22:35:19.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP:�[0m Destroying namespace "volume-8642" for this suite. �[38;5;243m01/17/23 22:35:19.83�[0m
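(Editor's note: to re-check the node state that the framework dumps above against a live cluster, the same information is available through kubectl. A minimal sketch, assuming the node name i-0f4738b0932ab9299 and the kubeconfig path taken from the log:)

  # Summarize the node conditions and taints shown in the dump above
  kubectl --kubeconfig "$HOME/.kube/config" describe node i-0f4738b0932ab9299
  # Or pull just the condition types and statuses
  kubectl get node i-0f4738b0932ab9299 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  # List the pods the kubelet is running on that node (the "Logging pods" section above)
  kubectl get pods --all-namespaces --field-selector spec.nodeName=i-0f4738b0932ab9299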
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sbe\sable\sto\sunmount\safter\sthe\ssubpath\sdirectory\sis\sdeleted\s\[LinuxOnly\]$'
test/e2e/storage/testsuites/subpath.go:178 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2() test/e2e/storage/testsuites/subpath.go:178 +0x145 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func20() test/e2e/storage/testsuites/subpath.go:474 +0x4f7from junit_01.xml
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","completed":1,"skipped":14,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:07.657�[0m Jan 17 22:31:07.657: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/17/23 22:31:07.658�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:07.988�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:08.198�[0m [It] should be able to unmount after the subpath directory is deleted [LinuxOnly] test/e2e/storage/testsuites/subpath.go:446 Jan 17 22:31:08.520: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics Jan 17 22:31:08.628: INFO: Waiting up to 5m0s for pod "hostexec-i-05a4ff7b848c70e4e-scq5f" in namespace "provisioning-9183" to be "running" Jan 17 22:31:08.735: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-scq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 107.262054ms Jan 17 22:31:10.842: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-scq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214233978s Jan 17 22:31:12.843: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-scq5f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.215005233s Jan 17 22:31:12.843: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-scq5f" satisfied condition "running" Jan 17 22:31:12.843: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd && mount --bind /tmp/local-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd /tmp/local-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd] Namespace:provisioning-9183 PodName:hostexec-i-05a4ff7b848c70e4e-scq5f ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 17 22:31:12.843: INFO: >>> kubeConfig: /root/.kube/config Jan 17 22:31:12.844: INFO: ExecWithOptions: Clientset creation Jan 17 22:31:12.844: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183/pods/hostexec-i-05a4ff7b848c70e4e-scq5f/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%2Ftmp%2Flocal-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd+%26%26+mount+--bind+%2Ftmp%2Flocal-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd+%2Ftmp%2Flocal-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 17 22:31:13.905: INFO: Creating resource for pre-provisioned PV Jan 17 22:31:13.905: INFO: Creating PVC and PV �[1mSTEP:�[0m Creating a PVC followed by a PV �[38;5;243m01/17/23 22:31:13.905�[0m Jan 17 22:31:14.121: INFO: Waiting for PV local-zxc5s to bind to PVC pvc-g447s Jan 17 22:31:14.121: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-g447s] to have phase Bound Jan 17 22:31:14.226: INFO: PersistentVolumeClaim pvc-g447s found but phase is Pending instead of Bound. Jan 17 22:31:16.333: INFO: PersistentVolumeClaim pvc-g447s found but phase is Pending instead of Bound. Jan 17 22:31:18.446: INFO: PersistentVolumeClaim pvc-g447s found but phase is Pending instead of Bound. Jan 17 22:31:20.554: INFO: PersistentVolumeClaim pvc-g447s found but phase is Pending instead of Bound. Jan 17 22:31:22.661: INFO: PersistentVolumeClaim pvc-g447s found and phase=Bound (8.539996681s) Jan 17 22:31:22.661: INFO: Waiting up to 3m0s for PersistentVolume local-zxc5s to have phase Bound Jan 17 22:31:22.768: INFO: PersistentVolume local-zxc5s found and phase=Bound (107.044309ms) �[1mSTEP:�[0m Creating pod pod-subpath-test-preprovisionedpv-llqj �[38;5;243m01/17/23 22:31:22.98�[0m Jan 17 22:31:23.093: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-llqj" in namespace "provisioning-9183" to be "running" Jan 17 22:31:23.199: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 105.387859ms Jan 17 22:31:25.308: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214388745s Jan 17 22:31:27.305: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211428575s Jan 17 22:31:29.305: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211191474s Jan 17 22:31:31.305: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211771093s Jan 17 22:31:33.305: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.211344844s Jan 17 22:31:35.307: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.213231683s Jan 17 22:31:37.305: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.211384672s Jan 17 22:31:39.315: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.221524461s Jan 17 22:31:41.306: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.213026682s Jan 17 22:31:43.313: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.219995003s Jan 17 22:31:45.307: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj": Phase="Running", Reason="", readiness=false. Elapsed: 22.213873415s Jan 17 22:31:45.307: INFO: Pod "pod-subpath-test-preprovisionedpv-llqj" satisfied condition "running" Jan 17 22:31:45.307: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/ad18934c-96b4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-9183 exec pod-subpath-test-preprovisionedpv-llqj --container test-container-volume-preprovisionedpv-llqj -- /bin/sh -c rm -r /test-volume/provisioning-9183' Jan 17 22:31:46.522: INFO: stderr: "" Jan 17 22:31:46.522: INFO: stdout: "" �[1mSTEP:�[0m Deleting pod pod-subpath-test-preprovisionedpv-llqj �[38;5;243m01/17/23 22:31:46.522�[0m Jan 17 22:31:46.522: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-llqj" in namespace "provisioning-9183" Jan 17 22:31:46.694: INFO: Wait up to 5m0s for pod "pod-subpath-test-preprovisionedpv-llqj" to be fully deleted Jan 17 22:32:12.364: INFO: Encountered non-retryable error while getting pod provisioning-9183/pod-subpath-test-preprovisionedpv-llqj: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183/pods/pod-subpath-test-preprovisionedpv-llqj": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF �[1mSTEP:�[0m Deleting pod �[38;5;243m01/17/23 22:32:12.364�[0m Jan 17 22:32:12.365: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-llqj" in namespace "provisioning-9183" �[1mSTEP:�[0m Deleting pv and pvc �[38;5;243m01/17/23 22:32:12.482�[0m Jan 17 22:32:12.482: INFO: Deleting PersistentVolumeClaim "pvc-g447s" Jan 17 22:32:12.599: INFO: Deleting PersistentVolume "local-zxc5s" Jan 17 22:32:12.721: FAIL: Failed to delete PVC or PV: [failed to delete PVC "pvc-g447s": PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183/persistentvolumeclaims/pvc-g447s": dial tcp 54.78.31.51:443: connect: connection refused, failed to delete PV "local-zxc5s": PV Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-zxc5s": dial tcp 54.78.31.51:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2() test/e2e/storage/testsuites/subpath.go:178 +0x145 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func20() test/e2e/storage/testsuites/subpath.go:474 +0x4f7 [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting 
events from namespace "provisioning-9183". �[38;5;243m01/17/23 22:32:12.721�[0m Jan 17 22:32:12.840: INFO: Unexpected error: failed to list events in namespace "provisioning-9183": <*url.Error | 0xc0030b03c0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183/events", Err: <*net.OpError | 0xc002c7f680>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0024bf3b0>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002c70780>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.840: FAIL: failed to list events in namespace "provisioning-9183": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0030e7590, {0xc0029ffbc0, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000c7b080}, {0xc0029ffbc0, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00144b340, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00144b340) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "provisioning-9183" for this suite. �[38;5;243m01/17/23 22:32:12.841�[0m Jan 17 22:32:28.415: FAIL: Couldn't delete ns: "provisioning-9183": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-9183", Err:(*net.OpError)(0xc002c7fe50)}) Full Stack Trace panic({0x6ea2520, 0xc002f2d300}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0005581c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00309b700, 0x100}, {0xc0030e7048?, 0x735bfcc?, 0xc0030e7068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0005e23c0, 0xeb}, {0xc0030e70e0?, 0xc00308f980?, 0xc0030e7108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc0030b03c0}, {0xc002c707c0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc0030e7590, {0xc0029ffbc0, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000c7b080}, {0xc0029ffbc0, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00144b340, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00144b340) test/e2e/framework/framework.go:435 +0x21d
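(Editor's note: the "dir-bindmounted" LocalVolumeType in this spec is prepared by the hostexec pod with the nsenter command shown above: a directory is created and bind-mounted onto itself. A minimal sketch of the same node-side setup and teardown, run as root, with the directory name copied from the log (any path would do):)

  # What the hostexec pod does via nsenter to back the local PV
  DIR=/tmp/local-driver-27bf42c9-7dea-443e-94cb-ab9be12ececd
  mkdir "$DIR"
  mount --bind "$DIR" "$DIR"

  # Teardown once the PV and PVC have been deleted
  umount "$DIR"
  rm -rf "$DIR"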
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sfile\sas\ssubpath\s\[LinuxOnly\]$'
test/e2e/framework/util.go:843 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000c78420, 0xc001974400, {0xc0026eae40, 0x2c}, {0xc000acdec8, 0x1, 0x0?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc001974400?, {0x73a730f?, 0x0?}, 0xc001974400, 0x0, {0xc000acdec8, 0x1, 0x1}, 0x0?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpathFile(0xc000c78420?, {0xc0005a6960?, 0x11?}, 0xc001974400?, {0x737316f?, 0x0?}) test/e2e/storage/testsuites/subpath.go:490 +0x12a k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpath(...) test/e2e/storage/testsuites/subpath.go:481 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func6() test/e2e/storage/testsuites/subpath.go:238 +0x1d8from junit_01.xml
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","completed":0,"skipped":8,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:05.603�[0m Jan 17 22:31:05.604: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename provisioning �[38;5;243m01/17/23 22:31:05.605�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:05.927�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:06.138�[0m [It] should support file as subpath [LinuxOnly] test/e2e/storage/testsuites/subpath.go:231 Jan 17 22:31:06.560: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics Jan 17 22:31:06.739: INFO: Waiting up to 5m0s for pod "hostexec-i-05a4ff7b848c70e4e-drgjz" in namespace "provisioning-8796" to be "running" Jan 17 22:31:06.880: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-drgjz": Phase="Pending", Reason="", readiness=false. Elapsed: 140.883155ms Jan 17 22:31:08.988: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-drgjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249129306s Jan 17 22:31:10.989: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-drgjz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250306396s Jan 17 22:31:12.988: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-drgjz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.248878818s Jan 17 22:31:15.018: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-drgjz": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.279573588s Jan 17 22:31:15.018: INFO: Pod "hostexec-i-05a4ff7b848c70e4e-drgjz" satisfied condition "running" Jan 17 22:31:15.018: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir /tmp/local-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf-backend && ln -s /tmp/local-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf-backend /tmp/local-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf] Namespace:provisioning-8796 PodName:hostexec-i-05a4ff7b848c70e4e-drgjz ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 17 22:31:15.018: INFO: >>> kubeConfig: /root/.kube/config Jan 17 22:31:15.019: INFO: ExecWithOptions: Clientset creation Jan 17 22:31:15.019: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796/pods/hostexec-i-05a4ff7b848c70e4e-drgjz/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+%2Ftmp%2Flocal-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf-backend+%26%26+ln+-s+%2Ftmp%2Flocal-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf-backend+%2Ftmp%2Flocal-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 17 22:31:15.744: INFO: Creating resource for pre-provisioned PV Jan 17 22:31:15.744: INFO: Creating PVC and PV �[1mSTEP:�[0m Creating a PVC followed by a PV �[38;5;243m01/17/23 22:31:15.744�[0m Jan 17 22:31:15.960: INFO: Waiting for PV local-7vvl2 to bind to PVC pvc-kq486 Jan 17 22:31:15.960: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-kq486] to have phase Bound Jan 17 22:31:16.070: INFO: PersistentVolumeClaim pvc-kq486 found but phase is Pending instead of Bound. Jan 17 22:31:18.181: INFO: PersistentVolumeClaim pvc-kq486 found but phase is Pending instead of Bound. Jan 17 22:31:20.289: INFO: PersistentVolumeClaim pvc-kq486 found but phase is Pending instead of Bound. Jan 17 22:31:22.402: INFO: PersistentVolumeClaim pvc-kq486 found and phase=Bound (6.441813581s) Jan 17 22:31:22.402: INFO: Waiting up to 3m0s for PersistentVolume local-7vvl2 to have phase Bound Jan 17 22:31:22.517: INFO: PersistentVolume local-7vvl2 found and phase=Bound (115.264617ms) �[1mSTEP:�[0m Creating pod pod-subpath-test-preprovisionedpv-ct78 �[38;5;243m01/17/23 22:31:22.732�[0m �[1mSTEP:�[0m Creating a pod to test atomic-volume-subpath �[38;5;243m01/17/23 22:31:22.733�[0m Jan 17 22:31:22.847: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ct78" in namespace "provisioning-8796" to be "Succeeded or Failed" Jan 17 22:31:22.957: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 109.912656ms Jan 17 22:31:25.067: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219948521s Jan 17 22:31:27.067: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22036285s Jan 17 22:31:29.065: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217738683s Jan 17 22:31:31.064: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217201915s Jan 17 22:31:33.071: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.22390416s Jan 17 22:31:35.065: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 12.217617602s Jan 17 22:31:37.069: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 14.222121344s Jan 17 22:31:39.067: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 16.2200457s Jan 17 22:31:41.065: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 18.218086896s Jan 17 22:31:43.065: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Pending", Reason="", readiness=false. Elapsed: 20.2181741s Jan 17 22:31:45.064: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Running", Reason="", readiness=true. Elapsed: 22.217352652s Jan 17 22:31:47.071: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Running", Reason="", readiness=true. Elapsed: 24.223605784s Jan 17 22:31:49.066: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Running", Reason="", readiness=true. Elapsed: 26.219041043s Jan 17 22:31:51.071: INFO: Pod "pod-subpath-test-preprovisionedpv-ct78": Phase="Running", Reason="", readiness=true. Elapsed: 28.224115542s Jan 17 22:32:12.353: INFO: Encountered non-retryable error while getting pod provisioning-8796/pod-subpath-test-preprovisionedpv-ct78: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796/pods/pod-subpath-test-preprovisionedpv-ct78": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:32:12.473�[0m Jan 17 22:32:12.591: FAIL: Failed to delete pod "pod-subpath-test-preprovisionedpv-ct78": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796/pods/pod-subpath-test-preprovisionedpv-ct78": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000c78420, 0xc001974400, {0xc0026eae40, 0x2c}, {0xc000acdec8, 0x1, 0x0?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc001974400?, {0x73a730f?, 0x0?}, 0xc001974400, 0x0, {0xc000acdec8, 0x1, 0x1}, 0x0?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpathFile(0xc000c78420?, {0xc0005a6960?, 0x11?}, 0xc001974400?, {0x737316f?, 0x0?}) test/e2e/storage/testsuites/subpath.go:490 +0x12a k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpath(...) 
test/e2e/storage/testsuites/subpath.go:481 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func6() test/e2e/storage/testsuites/subpath.go:238 +0x1d8 �[1mSTEP:�[0m Deleting pod �[38;5;243m01/17/23 22:32:12.591�[0m Jan 17 22:32:12.591: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ct78" in namespace "provisioning-8796" �[1mSTEP:�[0m Deleting pv and pvc �[38;5;243m01/17/23 22:32:12.709�[0m Jan 17 22:32:12.709: INFO: Deleting PersistentVolumeClaim "pvc-kq486" Jan 17 22:32:12.825: INFO: Deleting PersistentVolume "local-7vvl2" Jan 17 22:32:28.417: FAIL: Failed to delete PVC or PV: [failed to delete PVC "pvc-kq486": PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796/persistentvolumeclaims/pvc-kq486": dial tcp 54.78.31.51:443: connect: connection refused, failed to delete PV "local-7vvl2": PV Delete API error: Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-7vvl2": dial tcp 54.78.31.51:443: connect: connection refused] Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2() test/e2e/storage/testsuites/subpath.go:178 +0x145 panic({0x6ea2520, 0xc002adf140}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0000f9810}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002a3a640, 0x12f}, {0xc000acd9d8?, 0x735bfcc?, 0xc000acda00?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Failf({0x73ce3de?, 0x26?}, {0xc000acdaf0?, 0x0?, 0x0?}) test/e2e/framework/log.go:51 +0x12c k8s.io/kubernetes/test/e2e/framework.(*PodClient).DeleteSync(0xc000e8d3f8, {0xc0026eba10, 0x26}, {{{0x0, 0x0}, {0x0, 0x0}}, 0x0, 0x0, 0x0, ...}, ...) test/e2e/framework/pods.go:183 +0x195 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000c78420, 0xc001974400, {0xc0026eae40, 0x2c}, {0xc000acdec8, 0x1, 0x0?}, 0x7624530) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc001974400?, {0x73a730f?, 0x0?}, 0xc001974400, 0x0, {0xc000acdec8, 0x1, 0x1}, 0x0?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) test/e2e/framework/framework.go:581 k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpathFile(0xc000c78420?, {0xc0005a6960?, 0x11?}, 0xc001974400?, {0x737316f?, 0x0?}) test/e2e/storage/testsuites/subpath.go:490 +0x12a k8s.io/kubernetes/test/e2e/storage/testsuites.TestBasicSubpath(...) test/e2e/storage/testsuites/subpath.go:481 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func6() test/e2e/storage/testsuites/subpath.go:238 +0x1d8 [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "provisioning-8796". 
�[38;5;243m01/17/23 22:32:28.418�[0m Jan 17 22:32:28.547: INFO: Unexpected error: failed to list events in namespace "provisioning-8796": <*url.Error | 0xc0030aa390>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796/events", Err: <*net.OpError | 0xc000527e00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002eda450>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc00052b180>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:28.547: FAIL: failed to list events in namespace "provisioning-8796": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003891590, {0xc0005a6960, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000be3b00}, {0xc0005a6960, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000c78420, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c78420) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "provisioning-8796" for this suite. �[38;5;243m01/17/23 22:32:28.548�[0m Jan 17 22:32:28.665: FAIL: Couldn't delete ns: "provisioning-8796": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-8796", Err:(*net.OpError)(0xc002f86e60)}) Full Stack Trace panic({0x6ea2520, 0xc002b2f640}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00068f6c0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003096a00, 0x100}, {0xc003891048?, 0x735bfcc?, 0xc003891068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00053e1e0, 0xeb}, {0xc0038910e0?, 0xc0030b4540?, 0xc003891108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc0030aa390}, {0xc00052b1c0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003891590, {0xc0005a6960, 0x11}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000be3b00}, {0xc0005a6960, 0x11}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000c78420, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c78420) test/e2e/framework/framework.go:435 +0x21d
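(Editor's note: the "dir-link" LocalVolumeType is set up the same way, except the published path is a symlink to a backing directory, per the mkdir/ln -s command in the ExecWithOptions line above. A sketch of the node-side setup and cleanup, with the paths taken from the log:)

  BACKEND=/tmp/local-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf-backend
  LINK=/tmp/local-driver-dddef41e-6bcf-4c78-ac43-5147dcedcdaf
  mkdir "$BACKEND"
  ln -s "$BACKEND" "$LINK"

  # Cleanup after the test resources are gone
  rm "$LINK"
  rm -rf "$BACKEND"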
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sdir\-link\-bindmounted\]\sOne\spod\srequesting\sone\sprebound\sPVC\sshould\sbe\sable\sto\smount\svolume\sand\sread\sfrom\spod1$'
test/e2e/storage/utils/host_exec.go:110 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0x761f658?, {0xc002e95830, 0x13}) test/e2e/storage/utils/host_exec.go:110 +0x445 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc000ca6af0, {0xc000e8ac00, 0x16a}, 0xc001876400) test/e2e/storage/utils/host_exec.go:136 +0x110 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0x5?, {0xc000e8ac00?, 0xc000e8ac00?}, 0xc000652d00?) test/e2e/storage/utils/host_exec.go:169 +0x33 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0x7459ef7?, {0xc000e8ac00?, 0xc003919d28?}, 0x5?) test/e2e/storage/utils/host_exec.go:178 +0x1e k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLinkBindMounted(0xc002f7eba0, 0xc001876400, 0x1?) test/e2e/storage/utils/local.go:258 +0x182 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0x648eaa0?, 0xc001876400, {0x73a10e8, 0x14}, 0x0) test/e2e/storage/utils/local.go:318 +0x1b4 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc0027b39e0, {0x73a10e8, 0x14}, 0x0?, 0x1) test/e2e/storage/persistent_volumes-local.go:839 +0x13e k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0027b39e0?, {0x73a10e8, 0x14}, 0xc000863980?, 0x0?, {0x73692ff, 0x9}) test/e2e/storage/persistent_volumes-local.go:1104 +0x7d k8s.io/kubernetes/test/e2e/storage.glob..func24.2.1() test/e2e/storage/persistent_volumes-local.go:202 +0xd7from junit_01.xml
{"msg":"FAILED [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","completed":2,"skipped":47,"failed":1,"failures":["[sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]} [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:47.469�[0m Jan 17 22:31:47.469: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename persistent-local-volumes-test �[38;5;243m01/17/23 22:31:47.47�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:47.788�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:47.997�[0m [BeforeEach] [sig-storage] PersistentVolumes-local test/e2e/storage/persistent_volumes-local.go:160 [BeforeEach] [Volume type: dir-link-bindmounted] test/e2e/storage/persistent_volumes-local.go:197 �[1mSTEP:�[0m Initializing test volumes �[38;5;243m01/17/23 22:31:48.422�[0m Jan 17 22:31:48.533: INFO: Waiting up to 5m0s for pod "hostexec-i-0242e0df14fd9a246-g7tf7" in namespace "persistent-local-volumes-test-5361" to be "running" Jan 17 22:31:48.638: INFO: Pod "hostexec-i-0242e0df14fd9a246-g7tf7": Phase="Pending", Reason="", readiness=false. Elapsed: 105.710981ms Jan 17 22:31:50.747: INFO: Pod "hostexec-i-0242e0df14fd9a246-g7tf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21438727s Jan 17 22:32:12.365: INFO: Encountered non-retryable error while getting pod persistent-local-volumes-test-5361/hostexec-i-0242e0df14fd9a246-g7tf7: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361/pods/hostexec-i-0242e0df14fd9a246-g7tf7": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF Jan 17 22:32:12.365: INFO: Unexpected error: <*fmt.wrapError | 0xc002f21580>: { msg: "error while waiting for pod persistent-local-volumes-test-5361/hostexec-i-0242e0df14fd9a246-g7tf7 to be running: Get \"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361/pods/hostexec-i-0242e0df14fd9a246-g7tf7\": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF", err: <*rest.wrapPreviousError | 0xc002f21560>{ currentErr: <*url.Error | 0xc002f7f410>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361/pods/hostexec-i-0242e0df14fd9a246-g7tf7", Err: <*net.OpError | 0xc002fe4190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003524540>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc002f21520>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <*errors.errorString | 0xc0000c6130>{s: "unexpected EOF"}, }, } Jan 17 22:32:12.365: FAIL: error while waiting for pod persistent-local-volumes-test-5361/hostexec-i-0242e0df14fd9a246-g7tf7 to be running: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361/pods/hostexec-i-0242e0df14fd9a246-g7tf7": dial tcp 54.78.31.51:443: connect: connection refused 
- error from a previous attempt: unexpected EOF Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0x761f658?, {0xc002e95830, 0x13}) test/e2e/storage/utils/host_exec.go:110 +0x445 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc000ca6af0, {0xc000e8ac00, 0x16a}, 0xc001876400) test/e2e/storage/utils/host_exec.go:136 +0x110 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0x5?, {0xc000e8ac00?, 0xc000e8ac00?}, 0xc000652d00?) test/e2e/storage/utils/host_exec.go:169 +0x33 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0x7459ef7?, {0xc000e8ac00?, 0xc003919d28?}, 0x5?) test/e2e/storage/utils/host_exec.go:178 +0x1e k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLinkBindMounted(0xc002f7eba0, 0xc001876400, 0x1?) test/e2e/storage/utils/local.go:258 +0x182 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0x648eaa0?, 0xc001876400, {0x73a10e8, 0x14}, 0x0) test/e2e/storage/utils/local.go:318 +0x1b4 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc0027b39e0, {0x73a10e8, 0x14}, 0x0?, 0x1) test/e2e/storage/persistent_volumes-local.go:839 +0x13e k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0027b39e0?, {0x73a10e8, 0x14}, 0xc000863980?, 0x0?, {0x73692ff, 0x9}) test/e2e/storage/persistent_volumes-local.go:1104 +0x7d k8s.io/kubernetes/test/e2e/storage.glob..func24.2.1() test/e2e/storage/persistent_volumes-local.go:202 +0xd7 [AfterEach] [Volume type: dir-link-bindmounted] test/e2e/storage/persistent_volumes-local.go:206 �[1mSTEP:�[0m Cleaning up PVC and PV �[38;5;243m01/17/23 22:32:12.366�[0m [AfterEach] [sig-storage] PersistentVolumes-local test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "persistent-local-volumes-test-5361". �[38;5;243m01/17/23 22:32:12.366�[0m Jan 17 22:32:12.483: INFO: Unexpected error: failed to list events in namespace "persistent-local-volumes-test-5361": <*url.Error | 0xc0034af170>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361/events", Err: <*net.OpError | 0xc001b6dc20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00361e090>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc0034a3320>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.484: FAIL: failed to list events in namespace "persistent-local-volumes-test-5361": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003533590, {0xc002778900, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc0021ef680}, {0xc002778900, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00153adc0, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00153adc0) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "persistent-local-volumes-test-5361" for this suite. 
�[38;5;243m01/17/23 22:32:12.484�[0m Jan 17 22:32:12.600: FAIL: Couldn't delete ns: "persistent-local-volumes-test-5361": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-5361", Err:(*net.OpError)(0xc002fe4870)}) Full Stack Trace panic({0x6ea2520, 0xc0034a7100}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc0005b51f0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0007b0f00, 0x122}, {0xc003533048?, 0x735bfcc?, 0xc003533068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0034be000, 0x10d}, {0xc0035330e0?, 0xc000400d00?, 0xc003533108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc0034af170}, {0xc0034a3360?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc003533590, {0xc002778900, 0x22}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc0021ef680}, {0xc002778900, 0x22}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc00153adc0, 0x3?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00153adc0) test/e2e/framework/framework.go:435 +0x21d
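(Editor's note: every failure from roughly 22:32:12 onward reports the same "dial tcp 54.78.31.51:443: connect: connection refused" while talking to the API server, which, together with the kube-apiserver restart counts in the node dump above, suggests the control plane endpoint was briefly unreachable rather than a storage-specific problem. A quick probe one could run while the suite is failing, assuming the API endpoint from the log; unauthenticated access to these health endpoints depends on the cluster's anonymous-auth/RBAC settings:)

  # Probe the cluster API endpoint seen in the connection-refused errors
  curl -k https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/healthz
  # Or go through kubectl, which uses the same endpoint from the kubeconfig
  kubectl --kubeconfig /root/.kube/config get --raw /readyz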
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sProjected\sconfigMap\sshould\sbe\sconsumable\sfrom\spods\sin\svolume\sas\snon\-root\swith\sFSGroup\s\[LinuxOnly\]\s\[NodeFeature\:FSGroup\]$'
test/e2e/framework/util.go:843 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000e1a000, 0xc001546000, {0x738e5f0, 0x11}, {0xc00259ff10, 0x2, 0xc002f62320?}, 0x7624538) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0x74204ac?, {0x739485d?, 0x2?}, 0xc001546000, 0x0, {0xc00259ff10, 0x2, 0x2}, 0x44?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...) test/e2e/framework/framework.go:588 k8s.io/kubernetes/test/e2e/common/storage.doProjectedConfigMapE2EWithoutMappings(0xc000e1a000, 0x1, 0x3e9, 0x0) test/e2e/common/storage/projected_configmap.go:515 +0x565 k8s.io/kubernetes/test/e2e/common/storage.glob..func7.5() test/e2e/common/storage/projected_configmap.go:80 +0x5efrom junit_01.xml
{"msg":"FAILED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","completed":2,"skipped":36,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]"]} [BeforeEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:43.538�[0m Jan 17 22:31:43.538: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename projected �[38;5;243m01/17/23 22:31:43.539�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:43.863�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:44.076�[0m [It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup] test/e2e/common/storage/projected_configmap.go:77 �[1mSTEP:�[0m Creating configMap with name projected-configmap-test-volume-9949168b-9ca0-4dc1-aebf-8df55d0bffb0 �[38;5;243m01/17/23 22:31:44.29�[0m �[1mSTEP:�[0m Creating a pod to test consume configMaps �[38;5;243m01/17/23 22:31:44.398�[0m Jan 17 22:31:44.511: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3" in namespace "projected-4669" to be "Succeeded or Failed" Jan 17 22:31:44.619: INFO: Pod "pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 107.189216ms Jan 17 22:31:46.739: INFO: Pod "pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227795186s Jan 17 22:31:48.726: INFO: Pod "pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214513522s Jan 17 22:31:50.731: INFO: Pod "pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219532977s Jan 17 22:32:12.361: INFO: Encountered non-retryable error while getting pod projected-4669/pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-4669/pods/pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:32:12.48�[0m Jan 17 22:32:12.598: FAIL: Failed to delete pod "pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-4669/pods/pod-projected-configmaps-3dafe73a-fa3c-4291-b696-9457aebfb2a3": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000e1a000, 0xc001546000, {0x738e5f0, 0x11}, {0xc00259ff10, 0x2, 0xc002f62320?}, 0x7624538) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0x74204ac?, {0x739485d?, 0x2?}, 0xc001546000, 0x0, {0xc00259ff10, 0x2, 0x2}, 0x44?) 
test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...) test/e2e/framework/framework.go:588 k8s.io/kubernetes/test/e2e/common/storage.doProjectedConfigMapE2EWithoutMappings(0xc000e1a000, 0x1, 0x3e9, 0x0) test/e2e/common/storage/projected_configmap.go:515 +0x565 k8s.io/kubernetes/test/e2e/common/storage.glob..func7.5() test/e2e/common/storage/projected_configmap.go:80 +0x5e [AfterEach] [sig-storage] Projected configMap test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "projected-4669". �[38;5;243m01/17/23 22:32:12.599�[0m Jan 17 22:32:12.725: INFO: Unexpected error: failed to list events in namespace "projected-4669": <*url.Error | 0xc003c82840>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-4669/events", Err: <*net.OpError | 0xc003a27040>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003d0a6f0>{IP: [54, 78, 31, 51], Port: 443, Zone: ""}, Err: <*os.SyscallError | 0xc003a13080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 17 22:32:12.725: FAIL: failed to list events in namespace "projected-4669": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-4669/events": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002b69590, {0xc003a5a030, 0xe}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc003b42c00}, {0xc003a5a030, 0xe}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000e1a000, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e1a000) test/e2e/framework/framework.go:435 +0x21d �[1mSTEP:�[0m Destroying namespace "projected-4669" for this suite. �[38;5;243m01/17/23 22:32:12.726�[0m Jan 17 22:32:12.846: FAIL: Couldn't delete ns: "projected-4669": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-4669": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-4669", Err:(*net.OpError)(0xc003a274a0)}) Full Stack Trace panic({0x6ea2520, 0xc003c8c540}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d panic({0x6ea4740, 0xc00072ae70}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003c86d00, 0xfa}, {0xc002b69048?, 0x735bfcc?, 0xc002b69068?}) test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc002a9cd20, 0xe5}, {0xc002b690e0?, 0xc003a7c420?, 0xc002b69108?}) test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc003c82840}, {0xc003a130c0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002b69590, {0xc003a5a030, 0xe}) test/e2e/framework/util.go:901 +0x191 k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc003b42c00}, {0xc003a5a030, 0xe}) test/e2e/framework/util.go:919 +0x8d k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000e1a000, 0x1?) test/e2e/framework/framework.go:181 +0x8b k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e1a000) test/e2e/framework/framework.go:435 +0x21d
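(Editor's note: for context on what this spec exercises before it hit the connection-refused error: the test pod mounts a projected configMap volume while running as a non-root UID with an fsGroup set, then reads the file back. A hand-written sketch of such a pod follows; the object names, UID/GID values, and the busybox image are illustrative, not the test's generated ones:)

  kubectl create configmap demo-cm --from-literal=data-1=value-1

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-fsgroup-demo
  spec:
    securityContext:
      runAsUser: 1000      # non-root user
      fsGroup: 1001        # group ownership applied to the projected volume
    containers:
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1 && sleep 3600"]
      volumeMounts:
      - name: cm
        mountPath: /etc/projected
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: demo-cm
  EOF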
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sProjected\ssecret\sshould\sbe\sconsumable\sin\smultiple\svolumes\sin\sa\spod\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/util.go:843 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000c63e40, 0xc00179c800, {0x7395e4d, 0x12}, {0xc0017d9f18, 0x2, 0xc0015fa550?}, 0x7624538) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0x740aa47?, {0x7382eed?, 0xc0017d9f58?}, 0xc00179c800, 0x0, {0xc0017d9f18, 0x2, 0x2}, 0x0?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...) test/e2e/framework/framework.go:588 k8s.io/kubernetes/test/e2e/common/storage.glob..func9.7() test/e2e/common/storage/projected_secret.go:203 +0xab0from junit_01.xml
{"msg":"FAILED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","completed":1,"skipped":32,"failed":1,"failures":["[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]"]} [BeforeEach] [sig-storage] Projected secret test/e2e/framework/framework.go:186 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/17/23 22:31:47.05�[0m Jan 17 22:31:47.050: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename projected �[38;5;243m01/17/23 22:31:47.051�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/17/23 22:31:47.378�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/17/23 22:31:47.592�[0m [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] test/e2e/common/storage/projected_secret.go:118 �[1mSTEP:�[0m Creating secret with name projected-secret-test-65cb1fcb-c234-400e-a12a-a8a038e02064 �[38;5;243m01/17/23 22:31:47.803�[0m �[1mSTEP:�[0m Creating a pod to test consume secrets �[38;5;243m01/17/23 22:31:47.912�[0m Jan 17 22:31:48.023: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38" in namespace "projected-1534" to be "Succeeded or Failed" Jan 17 22:31:48.129: INFO: Pod "pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38": Phase="Pending", Reason="", readiness=false. Elapsed: 106.243781ms Jan 17 22:31:50.238: INFO: Pod "pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214604157s Jan 17 22:32:12.365: INFO: Encountered non-retryable error while getting pod projected-1534/pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38: Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1534/pods/pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38": dial tcp 54.78.31.51:443: connect: connection refused - error from a previous attempt: unexpected EOF �[1mSTEP:�[0m delete the pod �[38;5;243m01/17/23 22:32:12.486�[0m Jan 17 22:32:12.606: FAIL: Failed to delete pod "pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1534/pods/pod-projected-secrets-b5983a6e-81db-4e4d-80e0-dcb409783d38": dial tcp 54.78.31.51:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput.func1() test/e2e/framework/util.go:843 +0xa7 k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000c63e40, 0xc00179c800, {0x7395e4d, 0x12}, {0xc0017d9f18, 0x2, 0xc0015fa550?}, 0x7624538) test/e2e/framework/util.go:852 +0x22f k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0x740aa47?, {0x7382eed?, 0xc0017d9f58?}, 0xc00179c800, 0x0, {0xc0017d9f18, 0x2, 0x2}, 0x0?) test/e2e/framework/util.go:770 +0x15f k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...) test/e2e/framework/framework.go:588 k8s.io/kubernetes/test/e2e/common/storage.glob..func9.7() test/e2e/common/storage/projected_secret.go:203 +0xab0 [AfterEach] [sig-storage] Projected secret test/e2e/framework/framework.go:187 �[1mSTEP:�[0m Collecting events from namespace "projected-1534". 
Jan 17 22:32:12.728: INFO: Unexpected error: failed to list events in namespace "projected-1534":
    <*url.Error | 0xc001c46390>: {
        Op: "Get",
        URL: "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1534/events",
        Err: <*net.OpError | 0xc001789360>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc001acc690>{IP: [54, 78, 31, 51], Port: 443, Zone: ""},
            Err: <*os.SyscallError | 0xc0004cde20>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Jan 17 22:32:12.728: FAIL: failed to list events in namespace "projected-1534": Get "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1534/events": dial tcp 54.78.31.51:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002d39590, {0xc000b8a870, 0xe})
	test/e2e/framework/util.go:901 +0x191
k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000bc6d80}, {0xc000b8a870, 0xe})
	test/e2e/framework/util.go:919 +0x8d
k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000c63e40, 0x1?)
	test/e2e/framework/framework.go:181 +0x8b
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c63e40)
	test/e2e/framework/framework.go:435 +0x21d
STEP: Destroying namespace "projected-1534" for this suite. 01/17/23 22:32:12.728
Jan 17 22:32:12.847: FAIL: Couldn't delete ns: "projected-1534": Delete "https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1534": dial tcp 54.78.31.51:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-kubenet-flatcar-k25-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1534", Err:(*net.OpError)(0xc0017897c0)})
Full Stack Trace
panic({0x6ea2520, 0xc002a29300})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
	test/e2e/framework/ginkgowrapper/wrapper.go:73 +0x7d
panic({0x6ea4740, 0xc0004cea10})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000026c00, 0xfa}, {0xc002d39048?, 0x735bfcc?, 0xc002d39068?})
	test/e2e/framework/ginkgowrapper/wrapper.go:77 +0x197
k8s.io/kubernetes/test/e2e/framework.Fail({0xc00386eb40, 0xe5}, {0xc002d390e0?, 0xc000fee2c0?, 0xc002d39108?})
	test/e2e/framework/log.go:63 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7c34da0, 0xc001c46390}, {0xc0004cde60?, 0x0?, 0x0?})
	test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/framework.dumpEventsInNamespace(0xc002d39590, {0xc000b8a870, 0xe})
	test/e2e/framework/util.go:901 +0x191
k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo({0x7ca2818, 0xc000bc6d80}, {0xc000b8a870, 0xe})
	test/e2e/framework/util.go:919 +0x8d
k8s.io/kubernetes/test/e2e/framework.NewFramework.func1(0xc000c63e40, 0x1?)
	test/e2e/framework/framework.go:181 +0x8b
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000c63e40)
	test/e2e/framework/framework.go:435 +0x21d
exit status 255
from junit_runner.xml
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains a x-kubernetes-validations rule that refers to a property that do not exist
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that contains a syntax error
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource definition that contains an x-kubernetes-validations rule that exceeds the estimated cost limit
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail create of a custom resource that exceeds the runtime cost limit for x-kubernetes-validations rule execution
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail update of a custom resource that does not satisfy a x-kubernetes-validations transition rule
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validations rules
Kubernetes e2e suite [It] [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should accurately determine present and missing resources
Kubernetes e2e suite [It] [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [It] [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should manage the lifecycle of a ResourceQuota [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [It] [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [It] [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [It] [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [It] [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [It] [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should be able to schedule after more than 100 missed schedule
Kubernetes e2e suite [It] [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [It] [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [It] [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [It] [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should support timezone
Kubernetes e2e suite [It] [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [It] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [It] [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [It] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [It] [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [It] [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [It] [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should apply changes to a job status [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [It] [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [It] [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [It] [sig-apps] Job should manage the lifecycle of a job [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [It] [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [It] [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet AvailableReplicas should get updated accordingly when MinReadySeconds is enabled
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [It] [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [It] [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts no secret-based service account token should be auto-generated
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [It] [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields in both the root and embedded object of a CR
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl validation should detect unknown metadata fields of a typed object
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [It] [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [It] [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] Events should manage the lifecycle of an event [Conformance]
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [It] [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [It] [sig-network] CVE-2021-29923 IPv4 Service Type ClusterIP with leading zeros should work interpreted as decimal
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [It] [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [It] [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Hostname [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [It] [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [It] [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy API with endport field
Kubernetes e2e suite [It] [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [It] [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [It] [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to up and down services
Kubernetes e2e suite [It] [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [It] [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [It] [sig-network] Services should be updated after adding or deleting ports
Kubernetes e2e suite [It] [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [It] [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [It] [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [It] [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [It] [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [It] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [It] [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [It] [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [It] [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [It] [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [It] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemermal volume and drivers with attachment
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [It] [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support CSIVolumeSource in Pod API
Kubernetes e2e suite [It] [sig-storage] CSIInlineVolumes should support ephemeral VolumeLifecycleMode in CSIDriver API
Kubernetes e2e suite [It] [sig-storage] CSIStorageCapacity should support CSIStorageCapacities API operations [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [It] [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [It] [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [It] [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
Kubernetes e2e suite [It] [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [It] [sig-storage] Volumes ConfigMap should be mountable
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest2 Down
kubetest2 Up
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [It] [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [It] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [It] [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [It] [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [It] [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [It] [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [It] [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [It] [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [It] [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [It] [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [It] [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [It] [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [It] [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [It] [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale down
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with autoscaling disabled shouldn't scale up
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with long upscale stabilization window should scale up only after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale down no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by number of Pods rate should scale up no more than given number of Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale down no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with scale limited by percentage should scale up no more than given percentage of current Pods per minute
Kubernetes e2e suite [It] [sig-autoscaling] [Feature:HPA] [Serial] [Slow] Horizontal pod autoscaling (non-default behavior) with short downscale stabilization window should scale down soon after the stabilization period
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [It] [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [It] [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [It] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [It] [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [It] [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [It] [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [It] [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [It] [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [It] [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [It] [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [It] [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should choose the one with the later CreationTimestamp, if equal the one with the lower name when two ingressClasses are marked as default[Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [It] [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create LoadBalancer Service without NodePort and change it [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [It] [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [It] [sig-network] Netpol API should support creating NetworkPolicy with Status subresource [Feature:NetworkPolicyStatus]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [It] [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [It] [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [It] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [It] [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [It] [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [It] [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [It] [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [It] [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [It] [sig-network] Services should fail health check node port if there are only terminating endpoints [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Local [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with externallTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should fallback to terminating endpoints when there are no ready endpoints with internalTrafficPolicy=Cluster [Feature:ProxyTerminatingEndpoints]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [It] [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [It] [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [It] [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [It] [sig-network] [Feature:Topology Hints] should distribute endpoints evenly
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [It] [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [It] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [It] [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [It] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [It] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [It] [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [It] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [It] [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [It] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [It] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI CSIDriver deployment after pod creation using non-attachable mock driver should bringup pod after deploying CSIDriver attach=false [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should call NodeUnstage after NodeStage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should not call NodeUnstage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage ephemeral error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeStage error cases [Slow] should retry NodeStage after NodeStage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] should call NodeStage after NodeUnstage success
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage final error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI NodeUnstage error cases [Slow] two pods: should call NodeStage after previous NodeUnstage transient error
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit dynamic CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Snapshot Controller metrics [Feature:VolumeSnapshotDataSource] snapshot controller should emit pre-provisioned CreateSnapshot, CreateSnapshotAndReady, and DeleteSnapshot metrics
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots [Feature:VolumeSnapshotDataSource] volumesnapshotcontent and pvc in Bound state with deletion timestamp set should not get deleted while snapshot finalizer exists
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI Volume Snapshots secrets [Feature:VolumeSnapshotDataSource] volume snapshot create/delete with secrets
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for generic ephemeral volume when persistent volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit for persistent volume when generic ephemeral volume is attached [Slow]
Kubernetes e2e suite [It] [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] deletion should be idempotent
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with different parameters
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] [Feature:StorageProvider] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [It] [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when attachable [Feature:Flexvolumes]
Kubernetes e2e suite [It] [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [It] [sig-storage] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should be able to mount block device 'ablkdev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting block device 'ablkdev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Block Device [Slow] Should fail on mounting non-existent block device 'does-not-exist-blk-dev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should be able to mount character device 'achardev' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting character device 'achardev' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Character Device [Slow] Should fail on mounting non-existent character device 'does-not-exist-char-dev' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should be able to mount directory 'adir' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting directory 'adir' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Directory [Slow] Should fail on mounting non-existent directory 'does-not-exist-dir' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should be able to mount file 'afile' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting file 'afile' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType File [Slow] Should fail on mounting non-existent file 'does-not-exist-file' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should be able to mount socket 'asocket' successfully when HostPathType is HostPathUnset
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting non-existent socket 'does-not-exist-socket' when HostPathType is HostPathSocket
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathBlockDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathCharDev
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathDirectory
Kubernetes e2e suite [It] [sig-storage] HostPathType Socket [Slow] Should fail on mounting socket 'asocket' when HostPathType is HostPathFile
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-disk] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should mount multiple PV pointing to the same storage on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with any volume data source [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support expansion of pvcs created for ephemeral pvcs
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: azure-file] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]