Result   | FAILURE
Tests    | 24 failed / 872 succeeded
Started  |
Elapsed  | 33m35s
Revision | master
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\(allowExpansion\)\]\svolume\-expand\sVerify\sif\soffline\sPVC\sexpansion\sworks$'
test/e2e/storage/testsuites/volume_expand.go:238
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4()
    test/e2e/storage/testsuites/volume_expand.go:238 +0xe5a

There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.937: failed to list events in namespace "volume-expand-6679": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.978: Couldn't delete ns: "volume-expand-6679": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679", Err:(*net.OpError)(0xc004d8a500)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:38.234
Jan 20 17:14:38.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-expand 01/20/23 17:14:38.235
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:38.332
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:38.394
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/metrics/init/init.go:31
[It] Verify if offline PVC expansion works
  test/e2e/storage/testsuites/volume_expand.go:172
Jan 20 17:14:38.454: INFO: Creating resource for dynamic PV
Jan 20 17:14:38.454: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(ebs.csi.aws.com) supported size:{ 1Gi}
STEP: creating a StorageClass volume-expand-6679-e2e-scz75ww 01/20/23 17:14:38.454
STEP: creating a claim 01/20/23 17:14:38.487
STEP: Creating a pod with dynamically provisioned volume 01/20/23 17:14:38.552
Jan 20 17:14:38.585: INFO: Waiting up to 5m0s for pod "pod-c940fbe1-b922-428f-a253-841ff9f56573" in namespace "volume-expand-6679" to be "running"
Jan 20 17:14:38.615: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 30.504006ms
Jan 20 17:14:40.659: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074603633s
Jan 20 17:14:42.649: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064556057s
Jan 20 17:14:44.648: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06357629s
Jan 20 17:14:46.648: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063220123s
Jan 20 17:14:48.648: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062908684s
Jan 20 17:14:50.651: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 12.066296092s
Jan 20 17:14:52.649: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 14.064550868s
Jan 20 17:14:54.787: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 16.201953876s
Jan 20 17:14:56.647: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Pending", Reason="", readiness=false. Elapsed: 18.06235768s
Jan 20 17:14:58.647: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573": Phase="Running", Reason="", readiness=true. Elapsed: 20.062286907s
Jan 20 17:14:58.647: INFO: Pod "pod-c940fbe1-b922-428f-a253-841ff9f56573" satisfied condition "running"
STEP: Deleting the previously created pod 01/20/23 17:14:58.678
Jan 20 17:14:58.678: INFO: Deleting pod "pod-c940fbe1-b922-428f-a253-841ff9f56573" in namespace "volume-expand-6679"
Jan 20 17:14:58.718: INFO: Wait up to 5m0s for pod "pod-c940fbe1-b922-428f-a253-841ff9f56573" to be fully deleted
STEP: Expanding current pvc 01/20/23 17:15:04.784
Jan 20 17:15:04.784: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>} BinarySI}
STEP: Waiting for cloudprovider resize to finish 01/20/23 17:15:04.871
STEP: Checking for conditions on pvc 01/20/23 17:15:12.951
STEP: Creating a new pod with same volume 01/20/23 17:15:12.984
Jan 20 17:15:13.028: INFO: Waiting up to 10m0s for pod "pod-d9b2c311-b86f-4135-a026-635f052e5073" in namespace "volume-expand-6679" to be "running"
Jan 20 17:15:13.063: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 34.337281ms
Jan 20 17:15:15.096: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067466133s
Jan 20 17:15:17.096: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067773633s
Jan 20 17:15:19.095: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066132168s
Jan 20 17:15:21.094: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065942514s
Jan 20 17:15:23.094: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065802879s
Jan 20 17:15:25.097: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 12.068703229s
Jan 20 17:15:27.094: INFO: Pod "pod-d9b2c311-b86f-4135-a026-635f052e5073": Phase="Pending", Reason="", readiness=false. Elapsed: 14.065684365s
Jan 20 17:15:48.559: INFO: Encountered non-retryable error while getting pod volume-expand-6679/pod-d9b2c311-b86f-4135-a026-635f052e5073: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=313, ErrCode=NO_ERROR, debug=""
Jan 20 17:15:48.559: INFO: Unexpected error: while recreating pod for resizing: <*errors.errorString | 0xc00141c840>: { s: "pod \"pod-d9b2c311-b86f-4135-a026-635f052e5073\" is not Running: error while waiting for pod volume-expand-6679/pod-d9b2c311-b86f-4135-a026-635f052e5073 to be running: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=313, ErrCode=NO_ERROR, debug=\"\"", }
Jan 20 17:15:48.559: FAIL: while recreating pod for resizing: pod "pod-d9b2c311-b86f-4135-a026-635f052e5073" is not Running: error while waiting for pod volume-expand-6679/pod-d9b2c311-b86f-4135-a026-635f052e5073 to be running: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=313, ErrCode=NO_ERROR, debug=""

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4()
    test/e2e/storage/testsuites/volume_expand.go:238 +0xe5a

Jan 20 17:15:48.559: INFO: Deleting pod "pod-d9b2c311-b86f-4135-a026-635f052e5073" in namespace "volume-expand-6679"
Jan 20 17:15:48.610: INFO: Unexpected error: while cleaning up pod before exiting resizing test: <*errors.errorString | 0xc00139c7a0>: { s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073\": dial tcp 100.26.139.144:443: connect: connection refused", }
Jan 20 17:15:48.610: FAIL: while cleaning up pod before exiting resizing test: pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.2()
    test/e2e/storage/testsuites/volume_expand.go:236 +0xae
panic({0x70efe60, 0xc004be6b60})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc003e23d40, 0x22a}, {0xc004735a10?, 0xc003e23200?, 0xc004735a38?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc00141c840}, {0xc00141c850?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4()
    test/e2e/storage/testsuites/volume_expand.go:238 +0xe5a

Jan 20 17:15:48.611: INFO: Deleting pod "pod-c940fbe1-b922-428f-a253-841ff9f56573" in namespace "volume-expand-6679"
Jan 20 17:15:48.650: INFO: Unexpected error: while cleaning up pod already deleted in resize test: <*errors.errorString | 0xc0013f27c0>: { s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-c940fbe1-b922-428f-a253-841ff9f56573\": dial tcp 100.26.139.144:443: connect: connection refused", }
Jan 20 17:15:48.650: FAIL: while cleaning up pod already deleted in resize test: pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-c940fbe1-b922-428f-a253-841ff9f56573": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.1()
    test/e2e/storage/testsuites/volume_expand.go:192 +0xae
panic({0x70efe60, 0xc00091b2d0})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc000352c80, 0x12b}, {0xc004735600?, 0xc003a8f200?, 0xc004735628?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc00139c7a0}, {0xc00139c7b0?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.2()
    test/e2e/storage/testsuites/volume_expand.go:236 +0xae
panic({0x70efe60, 0xc004be6b60})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc003e23d40, 0x22a}, {0xc004735a10?, 0xc003e23200?, 0xc004735a38?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc00141c840}, {0xc00141c850?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4()
    test/e2e/storage/testsuites/volume_expand.go:238 +0xe5a

STEP: Deleting pod 01/20/23 17:15:48.65
Jan 20 17:15:48.650: INFO: Deleting pod "pod-c940fbe1-b922-428f-a253-841ff9f56573" in namespace "volume-expand-6679"
STEP: Deleting pod2 01/20/23 17:15:48.691
Jan 20 17:15:48.692: INFO: Deleting pod "pod-d9b2c311-b86f-4135-a026-635f052e5073" in namespace "volume-expand-6679"
STEP: Deleting pvc 01/20/23 17:15:48.731
Jan 20 17:15:48.771: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comm9w25"
STEP: Deleting sc 01/20/23 17:15:48.814
Jan 20 17:15:48.855: INFO: Unexpected error: while cleaning up resource: <errors.aggregate | len:3, cap:4>: [ <*errors.errorString | 0xc00139ceb0>{ s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-c940fbe1-b922-428f-a253-841ff9f56573\": dial tcp 100.26.139.144:443: connect: connection refused", }, <*errors.errorString | 0xc00141d180>{ s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073\": dial tcp 100.26.139.144:443: connect: connection refused", }, <errors.aggregate | len:3, cap:4>[ <*fmt.wrapError | 0xc004c66240>{ msg: "failed to find PVC ebs.csi.aws.comm9w25: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/persistentvolumeclaims/ebs.csi.aws.comm9w25\": dial tcp 100.26.139.144:443: connect: connection refused", err: <*url.Error | 0xc004ddb110>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/persistentvolumeclaims/ebs.csi.aws.comm9w25", Err: <*net.OpError | 0xc0049d65f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00247c000>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004c66200>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, <*fmt.wrapError | 0xc003de47a0>{ msg: "failed to delete PVC ebs.csi.aws.comm9w25: PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/persistentvolumeclaims/ebs.csi.aws.comm9w25\": dial tcp 100.26.139.144:443: connect: connection refused", err: <*errors.errorString | 0xc00139d7c0>{ s: "PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/persistentvolumeclaims/ebs.csi.aws.comm9w25\": dial tcp 100.26.139.144:443: connect: connection refused", }, }, <*fmt.wrapError | 0xc003de4920>{ msg: "failed to delete StorageClass volume-expand-6679-e2e-scz75ww: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-6679-e2e-scz75ww\": dial tcp 100.26.139.144:443: connect: connection refused", err: <*url.Error | 0xc00247d110>{ Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-6679-e2e-scz75ww", Err: <*net.OpError | 0xc0033e4320>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001711bc0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003de48e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, }, ], ]
Jan 20 17:15:48.855: FAIL: while cleaning up resource: [pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-c940fbe1-b922-428f-a253-841ff9f56573": dial tcp 100.26.139.144:443: connect: connection refused, pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/pods/pod-d9b2c311-b86f-4135-a026-635f052e5073": dial tcp 100.26.139.144:443: connect: connection refused, failed to find PVC ebs.csi.aws.comm9w25: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/persistentvolumeclaims/ebs.csi.aws.comm9w25": dial tcp 100.26.139.144:443: connect: connection refused, failed to delete PVC ebs.csi.aws.comm9w25: PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/persistentvolumeclaims/ebs.csi.aws.comm9w25": dial tcp 100.26.139.144:443: connect: connection refused, failed to delete StorageClass volume-expand-6679-e2e-scz75ww: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-expand-6679-e2e-scz75ww": dial tcp 100.26.139.144:443: connect: connection refused]

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func2()
    test/e2e/storage/testsuites/volume_expand.go:150 +0x3a6
panic({0x70efe60, 0xc004dcc620})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc002444140, 0x12d}, {0xc0047351f0?, 0xc000011b00?, 0xc004735218?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc0013f27c0}, {0xc0013f27d0?, 0x7f94380?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.1()
    test/e2e/storage/testsuites/volume_expand.go:192 +0xae
panic({0x70efe60, 0xc00091b2d0})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc000352c80, 0x12b}, {0xc004735600?, 0xc003a8f200?, 0xc004735628?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc00139c7a0}, {0xc00139c7b0?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4.2()
    test/e2e/storage/testsuites/volume_expand.go:236 +0xae
panic({0x70efe60, 0xc004be6b60})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc003e23d40, 0x22a}, {0xc004735a10?, 0xc003e23200?, 0xc004735a38?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc00141c840}, {0xc00141c850?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeExpandTestSuite).DefineTests.func4()
    test/e2e/storage/testsuites/volume_expand.go:238 +0xe5a

[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.896
STEP: Collecting events from namespace "volume-expand-6679". 01/20/23 17:15:48.896
Jan 20 17:15:48.937: INFO: Unexpected error: failed to list events in namespace "volume-expand-6679": <*url.Error | 0xc003e24f00>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/events", Err: <*net.OpError | 0xc0049d6ff0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002ab2120>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004c66600>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.937: FAIL: failed to list events in namespace "volume-expand-6679": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679/events": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc004e465c0, {0xc004b9aaf8, 0x12})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc004b8a680}, {0xc004b9aaf8, 0x12})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004e46650?, {0xc004b9aaf8?, 0x7fac780?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0016fa1e0)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc0013c54d0?, 0xc004895fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0031c08a8?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0013c54d0?, 0x2946afc?}, {0xae7b420?, 0xc004895f80?, 0xc003e18820?})
    /usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  tear down framework | framework.go:193
STEP: Destroying namespace "volume-expand-6679" for this suite. 01/20/23 17:15:48.937
Jan 20 17:15:48.978: FAIL: Couldn't delete ns: "volume-expand-6679": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-expand-6679", Err:(*net.OpError)(0xc004d8a500)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0016fa1e0)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc0013c5420?, 0xc0049aef08?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0049c6330?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0013c5420?, 0x0?}, {0xae7b420?, 0x0?, 0x0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
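Every FAIL entry above traces back to a single API-server outage: each request to https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io at ~17:15:48 fails with "connect: connection refused". The cascade of "additional failures" arises because the framework's ExpectNoError panics via Ginkgo's Fail, and each deferred cleanup that also talks to the dead API server panics again while unwinding. A minimal self-contained sketch of that mechanic (not the e2e framework itself; expectNoError is a stand-in):

// cascade.go - hedged sketch of how one API outage yields nested FAIL entries.
package main

import (
	"errors"
	"fmt"
)

var errConnRefused = errors.New("dial tcp 100.26.139.144:443: connect: connection refused")

// expectNoError mimics the framework's ExpectNoError: any error becomes a
// panic (in the real framework, framework.Fail panics via ginkgo.Fail).
func expectNoError(err error, context string) {
	if err != nil {
		panic(fmt.Sprintf("FAIL: %s: %v", context, err))
	}
}

func main() {
	// Outermost recover stands in for Ginkgo rescuing the panic.
	defer func() { fmt.Println("recovered:", recover()) }()
	// Deferred cleanup also hits the dead API server, so it panics again
	// while the first panic is unwinding - the "additional failures".
	defer expectNoError(errConnRefused, "while cleaning up pod before exiting resizing test")
	// Initial failure: the pod Get is refused.
	expectNoError(errConnRefused, "while recreating pod for resizing")
}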
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\sorphan\spods\screated\sby\src\sif\sdeleteOptions\.OrphanDependents\sis\snil$'
test/e2e/apimachinery/garbage_collector.go:462
k8s.io/kubernetes/test/e2e/apimachinery.glob..func13.3()
    test/e2e/apimachinery/garbage_collector.go:462 +0x2aa

There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.644: failed to list events in namespace "gc-9502": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.688: Couldn't delete ns: "gc-9502": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502", Err:(*net.OpError)(0xc00501cf50)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-api-machinery] Garbage collector
  set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:26.539
Jan 20 17:15:26.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc 01/20/23 17:15:26.541
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:26.63
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:26.686
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:31
[It] should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:439
STEP: create the rc 01/20/23 17:15:26.741
Jan 20 17:15:48.560: FAIL: failed to wait for the rc.Status.Replicas to reach rc.Spec.Replicas: failed to get rc: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502/replicationcontrollers/simpletest.rc": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=503, ErrCode=NO_ERROR, debug=""

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func13.3()
    test/e2e/apimachinery/garbage_collector.go:462 +0x2aa

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.603
STEP: Collecting events from namespace "gc-9502". 01/20/23 17:15:48.603
Jan 20 17:15:48.643: INFO: Unexpected error: failed to list events in namespace "gc-9502": <*url.Error | 0xc004343110>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502/events", Err: <*net.OpError | 0xc004c09ea0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003c112f0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0045dd060>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.644: FAIL: failed to list events in namespace "gc-9502": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502/events": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0003cc5c0, {0xc001464690, 0x7})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc0040b01a0}, {0xc001464690, 0x7})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0003cc650?, {0xc001464690?, 0x7fac780?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0008aeff0)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc0014c4b30?, 0xc00200cfb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc004ac0228?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0014c4b30?, 0x2946afc?}, {0xae7b420?, 0xc00200cf80?, 0x756b170a01d0018a?})
    /usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
  tear down framework | framework.go:193
STEP: Destroying namespace "gc-9502" for this suite. 01/20/23 17:15:48.644
Jan 20 17:15:48.688: FAIL: Couldn't delete ns: "gc-9502": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/gc-9502", Err:(*net.OpError)(0xc00501cf50)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0008aeff0)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc0014c49f0?, 0x0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0014c49f0?, 0x0?}, {0xae7b420?, 0x0?, 0x0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
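The rc Get above, like several failures in this run, carries the suffix "error from a previous attempt: http2: server sent GOAWAY and closed the connection". If you need to detect that condition programmatically, one option is walking the wrapped-error chain for http2.GoAwayError; a hedged sketch follows (the fmt.Errorf wrapping is a stand-in for illustration only; client-go's own wrapPreviousError may not expose the previous error via Unwrap):

package main

import (
	"errors"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	// Stand-in for the wrapped error seen in the log above (LastStreamID=503).
	err := fmt.Errorf("connect: connection refused - error from a previous attempt: %w",
		http2.GoAwayError{LastStreamID: 503, ErrCode: http2.ErrCodeNo})

	// errors.As walks the chain and extracts the GOAWAY details if present.
	var goAway http2.GoAwayError
	if errors.As(err, &goAway) {
		fmt.Printf("server sent GOAWAY, LastStreamID=%d, ErrCode=%v\n",
			goAway.LastStreamID, goAway.ErrCode)
	}
}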
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sResourceQuota\sshould\sapply\schanges\sto\sa\sresourcequota\sstatus\s\[Conformance\]$'
test/e2e/apimachinery/resource_quota.go:1186
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15.6()
    test/e2e/apimachinery/resource_quota.go:1186 +0x85
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x262c61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc002709c20, 0x2fdd8ca?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x18?, 0x2fdc465?, 0x30?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0xc00361b620?, 0xc004a24d68?, 0x262c967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00066a900?, 0x0?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15()
    test/e2e/apimachinery/resource_quota.go:1184 +0x4425

There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.665: failed to list events in namespace "resourcequota-6484": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.707: Couldn't delete ns: "resourcequota-6484": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484", Err:(*net.OpError)(0xc001755d60)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-api-machinery] ResourceQuota
  set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:19.649
Jan 20 17:14:19.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota 01/20/23 17:14:19.65
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:19.743
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:19.802
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/metrics/init/init.go:31
[It] should apply changes to a resourcequota status [Conformance]
  test/e2e/apimachinery/resource_quota.go:1010
STEP: Creating resourceQuota "e2e-rq-status-pddjc" 01/20/23 17:14:19.893
Jan 20 17:14:19.955: INFO: Resource quota "e2e-rq-status-pddjc" reports spec: hard cpu limit of 500m
Jan 20 17:14:19.955: INFO: Resource quota "e2e-rq-status-pddjc" reports spec: hard memory limit of 500Mi
STEP: Updating resourceQuota "e2e-rq-status-pddjc" /status 01/20/23 17:14:19.955
STEP: Confirm /status for "e2e-rq-status-pddjc" resourceQuota via watch 01/20/23 17:14:20.017
Jan 20 17:14:20.046: INFO: observed resourceQuota "e2e-rq-status-pddjc" in namespace "resourcequota-6484" with hard status: v1.ResourceList(nil)
Jan 20 17:14:20.046: INFO: Found resourceQuota "e2e-rq-status-pddjc" in namespace "resourcequota-6484" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}
Jan 20 17:14:20.046: INFO: ResourceQuota "e2e-rq-status-pddjc" /status was updated
STEP: Patching hard spec values for cpu & memory 01/20/23 17:14:20.077
Jan 20 17:14:20.116: INFO: Resource quota "e2e-rq-status-pddjc" reports spec: hard cpu limit of 1
Jan 20 17:14:20.116: INFO: Resource quota "e2e-rq-status-pddjc" reports spec: hard memory limit of 1Gi
STEP: Patching "e2e-rq-status-pddjc" /status 01/20/23 17:14:20.116
STEP: Confirm /status for "e2e-rq-status-pddjc" resourceQuota via watch 01/20/23 17:14:20.147
Jan 20 17:14:20.177: INFO: observed resourceQuota "e2e-rq-status-pddjc" in namespace "resourcequota-6484" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}
Jan 20 17:14:20.177: INFO: Found resourceQuota "e2e-rq-status-pddjc" in namespace "resourcequota-6484" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}
Jan 20 17:14:20.177: INFO: ResourceQuota "e2e-rq-status-pddjc" /status was patched
STEP: Get "e2e-rq-status-pddjc" /status 01/20/23 17:14:20.177
Jan 20 17:14:20.208: INFO: Resourcequota "e2e-rq-status-pddjc" reports status: hard cpu of 1
Jan 20 17:14:20.208: INFO: Resourcequota "e2e-rq-status-pddjc" reports status: hard memory of 1Gi
STEP: Repatching "e2e-rq-status-pddjc" /status before checking Spec is unchanged 01/20/23 17:14:20.242
Jan 20 17:14:20.274: INFO: Resourcequota "e2e-rq-status-pddjc" reports status: hard cpu of 2
Jan 20 17:14:20.274: INFO: Resourcequota "e2e-rq-status-pddjc" reports status: hard memory of 2Gi
Jan 20 17:14:20.304: INFO: Found resourceQuota "e2e-rq-status-pddjc" in namespace "resourcequota-6484" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}}
Jan 20 17:15:48.583: INFO: Unexpected error: <*rest.wrapPreviousError | 0xc00362a160>: { currentErr: <*url.Error | 0xc003d42930>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484/resourcequotas/e2e-rq-status-pddjc", Err: <*net.OpError | 0xc00395cb40>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0034da8a0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00362a120>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 217, ErrCode: 0, DebugData: ""}, }
Jan 20 17:15:48.583: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484/resourcequotas/e2e-rq-status-pddjc": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=217, ErrCode=NO_ERROR, debug=""

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15.6()
    test/e2e/apimachinery/resource_quota.go:1186 +0x85
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x262c61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc002709c20, 0x2fdd8ca?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x18?, 0x2fdc465?, 0x30?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0xc00361b620?, 0xc004a24d68?, 0x262c967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00066a900?, 0x0?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15()
    test/e2e/apimachinery/resource_quota.go:1184 +0x4425

E0120 17:15:48.583847 6708 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/apimachinery/resource_quota.go", LineNumber:1186, FullStackTrace:"k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15.6()\n\ttest/e2e/apimachinery/resource_quota.go:1186 +0x85\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x262c61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc002709c20, 0x2fdd8ca?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x18?, 0x2fdc465?, 0x30?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0xc00361b620?, 0xc004a24d68?, 0x262c967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00066a900?, 0x0?, 0x0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15()\n\ttest/e2e/apimachinery/resource_quota.go:1184 +0x4425", CustomMessage:""}}
(Your Test Panicked
test/e2e/apimachinery/resource_quota.go:1186

When you, or your assertion library, calls Ginkgo's Fail(),
Ginkgo panics to prevent subsequent assertions from running.

Normally Ginkgo rescues this panic so you shouldn't see it.

However, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

    defer GinkgoRecover()

at the top of the goroutine that caused this panic.

Alternatively, you may have made an assertion outside of a Ginkgo
leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to
an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).

Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
)

goroutine 613 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70efe60?, 0xc000b4f2d0})
    vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000b4f2d0?})
    vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x70efe60, 0xc000b4f2d0})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc00258e000, 0x16b}, {0xc001a1e750?, 0x75b9afa?, 0xc001a1e770?})
    vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225
k8s.io/kubernetes/test/e2e/framework.Fail({0xc00323a2c0, 0x156}, {0xc001a1e7e8?, 0xc00323a2c0?, 0xc001a1e810?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fb1e80, 0xc00362a160}, {0x0?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15.6()
    test/e2e/apimachinery/resource_quota.go:1186 +0x85
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x262c61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc002709c20, 0x2fdd8ca?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x18?, 0x2fdc465?, 0x30?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0xc00361b620?, 0xc004a24d68?, 0x262c967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00066a900?, 0x0?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.15()
    test/e2e/apimachinery/resource_quota.go:1184 +0x4425
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d5b73e, 0xc003b6c780})
    vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98
created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
    vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d

[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
  dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.625
STEP: Collecting events from namespace "resourcequota-6484". 01/20/23 17:15:48.625
Jan 20 17:15:48.665: INFO: Unexpected error: failed to list events in namespace "resourcequota-6484": <*url.Error | 0xc0034db350>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484/events", Err: <*net.OpError | 0xc002d3ef50>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003daa8a0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003440040>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.665: FAIL: failed to list events in namespace "resourcequota-6484": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484/events": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0006625c0, {0xc00066a900, 0x12})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc0024491e0}, {0xc00066a900, 0x12})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc000662650?, {0xc00066a900?, 0x7fac780?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000dfb770)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc000e78fc0?, 0xc0001b4fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc001d043c8?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc000e78fc0?, 0x2946afc?}, {0xae7b420?, 0xc0001b4f80?, 0xc0024491e0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc

[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
  tear down framework | framework.go:193
STEP: Destroying namespace "resourcequota-6484" for this suite. 01/20/23 17:15:48.666
Jan 20 17:15:48.707: FAIL: Couldn't delete ns: "resourcequota-6484": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-6484", Err:(*net.OpError)(0xc001755d60)})

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000dfb770)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc000e78f20?, 0xc0001b8f08?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc003b6c7b0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc000e78f20?, 0x0?}, {0xae7b420?, 0x0?, 0x0?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
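The stack above runs through wait.PollImmediate. Worth noting when reading these failures: a ConditionFunc that returns a non-nil error aborts the poll at once, so a single refused connection fails the test instead of being retried until the timeout. A hedged sketch of that behavior (the error string is copied from the log, not produced by a real client):

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (done bool, err error) {
		// A real test would Get the ResourceQuota here; any transport error
		// is returned as err and ends the poll immediately, not retried.
		return false, errors.New("dial tcp 100.26.139.144:443: connect: connection refused")
	})
	fmt.Println(err, "after", time.Since(start)) // returns almost immediately
}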
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-api\-machinery\]\sResourceQuota\sshould\screate\sa\sResourceQuota\sand\scapture\sthe\slife\sof\sa\sconfigMap\.\s\[Conformance\]$'
test/e2e/apimachinery/resource_quota.go:332
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5.1()
    test/e2e/apimachinery/resource_quota.go:332 +0xdf
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc00012a000?}, 0x262c61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc00012a000}, 0xc001cb37d0, 0x2fdd8ca?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc00012a000}, 0x78?, 0x2fdc465?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe5ba8, 0xc00012a000}, 0x0?, 0xc003d06cc8?, 0x262c967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 +0x47
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x2?, 0x0?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x50
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5()
    test/e2e/apimachinery/resource_quota.go:330 +0x10d

There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.641: failed to list events in namespace "resourcequota-9410": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.683: Couldn't delete ns: "resourcequota-9410": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410", Err:(*net.OpError)(0xc000af2f00)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370

from junit_01.xml
[BeforeEach] [sig-api-machinery] ResourceQuota
  set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:10.67
Jan 20 17:15:10.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota 01/20/23 17:15:10.672
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:10.766
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:10.823
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/metrics/init/init.go:31
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/apimachinery/resource_quota.go:326
Jan 20 17:15:48.559: INFO: Unexpected error: <*rest.wrapPreviousError | 0xc006c19da0>: { currentErr: <*url.Error | 0xc00229cae0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410/configmaps", Err: <*net.OpError | 0xc0040f41e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0024a90b0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc006c19d60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 331, ErrCode: 0, DebugData: ""}, }
Jan 20 17:15:48.559: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410/configmaps": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=331, ErrCode=NO_ERROR, debug=""

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5.1()
    test/e2e/apimachinery/resource_quota.go:332 +0xdf
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc00012a000?}, 0x262c61f?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc00012a000}, 0xc001cb37d0, 0x2fdd8ca?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc00012a000}, 0x78?, 0x2fdc465?, 0x20?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe5ba8, 0xc00012a000}, 0x0?, 0xc003d06cc8?, 0x262c967?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 +0x47
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x2?, 0x0?, 0x0?)
    vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x50
k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5()
    test/e2e/apimachinery/resource_quota.go:330 +0x10d

E0120 17:15:48.559411 6728 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/apimachinery/resource_quota.go", LineNumber:332, FullStackTrace:"k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5.1()\n\ttest/e2e/apimachinery/resource_quota.go:332 +0xdf\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc00012a000?}, 0x262c61f?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc00012a000}, 0xc001cb37d0, 0x2fdd8ca?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc00012a000}, 0x78?, 0x2fdc465?, 0x20?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe5ba8, 0xc00012a000}, 0x0?, 0xc003d06cc8?, 0x262c967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 +0x47\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x2?, 0x0?, 0x0?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x50\nk8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5()\n\ttest/e2e/apimachinery/resource_quota.go:330 +0x10d", CustomMessage:""}}
(Your Test Panicked
test/e2e/apimachinery/resource_quota.go:332

When you, or your assertion library, calls Ginkgo's Fail(),
Ginkgo panics to prevent subsequent assertions from running.

Normally Ginkgo rescues this panic so you shouldn't see it.

However, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

    defer GinkgoRecover()

at the top of the goroutine that caused this panic.

Alternatively, you may have made an assertion outside of a Ginkgo
leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to
an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
�[1mLearn more at:�[0m �[38;5;14m�[4mhttp://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure�[0m ) goroutine 2980 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70efe60?, 0xc00076b110}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00076b110?}) vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x70efe60, 0xc00076b110}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc006bb6000, 0x153}, {0xc0008db960?, 0x75b9afa?, 0xc0008db980?}) vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225 k8s.io/kubernetes/test/e2e/framework.Fail({0xc004492780, 0x13e}, {0xc0008db9f8?, 0xc004492780?, 0xc0008dba20?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fb1e80, 0xc006c19da0}, {0x0?, 0x0?, 0x0?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5.1() test/e2e/apimachinery/resource_quota.go:332 +0xdf k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc00012a000?}, 0x262c61f?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc00012a000}, 0xc001cb37d0, 0x2fdd8ca?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc00012a000}, 0x78?, 0x2fdc465?, 0x20?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x7fe5ba8, 0xc00012a000}, 0x0?, 0xc003d06cc8?, 0x262c967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:460 +0x47 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0x2?, 0x0?, 0x0?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x50 k8s.io/kubernetes/test/e2e/apimachinery.glob..func20.5() test/e2e/apimachinery/resource_quota.go:330 +0x10d k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0047f0bd0, 0xc0045f1a40}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98 created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d [AfterEach] [sig-api-machinery] ResourceQuota test/e2e/framework/node/init/init.go:32 Jan 20 17:15:48.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m01/20/23 17:15:48.601�[0m �[1mSTEP:�[0m Collecting events from namespace "resourcequota-9410". 
�[38;5;243m01/20/23 17:15:48.601�[0m Jan 20 17:15:48.641: INFO: Unexpected error: failed to list events in namespace "resourcequota-9410": <*url.Error | 0xc0016f5b30>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410/events", Err: <*net.OpError | 0xc000af2a00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002992ff0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000ee3020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 20 17:15:48.641: FAIL: failed to list events in namespace "resourcequota-9410": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410/events": dial tcp 100.26.139.144:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00296e5c0, {0xc001cb3458, 0x12}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc006b88000}, {0xc001cb3458, 0x12}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00296e650?, {0xc001cb3458?, 0x7fac780?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000b2b770) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x662c060?, 0xc001514940?, 0xc0067f7fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc006df03c8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc001514940?, 0x2946afc?}, {0xae7b420?, 0xc0067f7f80?, 0x2d6023d?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "resourcequota-9410" for this suite. �[38;5;243m01/20/23 17:15:48.641�[0m Jan 20 17:15:48.683: FAIL: Couldn't delete ns: "resourcequota-9410": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/resourcequota-9410", Err:(*net.OpError)(0xc000af2f00)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b2b770) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x662c060?, 0xc001514880?, 0x4?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0045f8f18?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc001514880?, 0x1?}, {0xae7b420?, 0xc005232301?, 0x1?}) /usr/local/go/src/reflect/value.go:368 +0xbc
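The Ginkgo message above spells out the usual remedy for assertions made on goroutines. A minimal Go sketch of that pattern (the helper and its arguments are illustrative, not code from the failing test):

package example

import (
	"github.com/onsi/ginkgo/v2"
	"k8s.io/kubernetes/test/e2e/framework"
)

// doAsyncCheck is an illustrative helper: any assertion that can call
// Fail() inside a goroutine must be preceded by defer GinkgoRecover(),
// otherwise the Fail panic escapes Ginkgo, as in the dump above.
func doAsyncCheck(done chan struct{}, err error) {
	go func() {
		defer ginkgo.GinkgoRecover() // lets Ginkgo catch the Fail() panic
		framework.ExpectNoError(err) // panics via Fail() when err is non-nil
		close(done)
	}()
}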
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sShould\srecreate\sevicted\sstatefulset\s\[Conformance\]$'
test/e2e/framework/statefulset/rest.go:69 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x8022ee8, 0xc002ae36c0}, 0xc0003e4f00) test/e2e/framework/statefulset/rest.go:69 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2() test/e2e/framework/statefulset/rest.go:155 +0x35 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0}) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x8022ee8?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc0006d4480, 0x2fdd8ca?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x78?, 0x2fdc465?, 0x28?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0x75b9afa?, 0xc00034a8c8?, 0x262c967?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x8022ee8?, 0xc002ae36c0?, 0xc003d08040?) vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x8022ee8?, 0xc002ae36c0}, 0x0?, 0x0) test/e2e/framework/statefulset/rest.go:154 +0x22d k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x8022ee8, 0xc002ae36c0}, {0xc0038b2680, 0x10}) test/e2e/framework/statefulset/rest.go:84 +0x1d7 k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2() test/e2e/apps/statefulset.go:129 +0x1b2 There were additional failures detected after the initial failure: [FAILED] Jan 20 17:15:48.650: failed to list events in namespace "statefulset-8827": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827/events": dial tcp 100.26.139.144:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Jan 20 17:15:48.698: Couldn't delete ns: "statefulset-8827": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827", Err:(*net.OpError)(0xc003d84140)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [sig-apps] StatefulSet set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:05.39
Jan 20 17:15:05.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset 01/20/23 17:15:05.391
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:05.527
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:05.611
[BeforeEach] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] StatefulSet test/e2e/apps/statefulset.go:98
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:113
STEP: Creating service test in namespace statefulset-8827 01/20/23 17:15:05.68
[It] Should recreate evicted statefulset [Conformance] test/e2e/apps/statefulset.go:739
STEP: Looking for a node to schedule stateful set and pod 01/20/23 17:15:05.741
STEP: Creating pod with conflicting port in namespace statefulset-8827 01/20/23 17:15:05.78
STEP: Waiting until pod test-pod will start running in namespace statefulset-8827 01/20/23 17:15:05.83
Jan 20 17:15:05.830: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-8827" to be "running"
Jan 20 17:15:05.877: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 46.142827ms
Jan 20 17:15:07.910: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079203375s
Jan 20 17:15:09.917: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086456887s
Jan 20 17:15:11.909: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.078716293s
Jan 20 17:15:11.909: INFO: Pod "test-pod" satisfied condition "running"
STEP: Creating statefulset with conflicting port in namespace statefulset-8827 01/20/23 17:15:11.909
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8827 01/20/23 17:15:11.947
Jan 20 17:15:11.991: INFO: Observed stateful pod in namespace: statefulset-8827, name: ss-0, uid: 422cb4bd-065d-4e5c-968a-018ad0445a47, status phase: Pending. Waiting for statefulset controller to delete.
Jan 20 17:15:12.916: INFO: Observed stateful pod in namespace: statefulset-8827, name: ss-0, uid: 422cb4bd-065d-4e5c-968a-018ad0445a47, status phase: Failed. Waiting for statefulset controller to delete.
Jan 20 17:15:12.951: INFO: Observed stateful pod in namespace: statefulset-8827, name: ss-0, uid: 422cb4bd-065d-4e5c-968a-018ad0445a47, status phase: Failed. Waiting for statefulset controller to delete.
Jan 20 17:15:12.969: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8827
STEP: Removing pod with conflicting port in namespace statefulset-8827 01/20/23 17:15:12.969
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8827 and will be in running state 01/20/23 17:15:13.011
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] test/e2e/apps/statefulset.go:124
Jan 20 17:15:23.207: INFO: Deleting all statefulset in ns statefulset-8827
Jan 20 17:15:23.237: INFO: Scaling statefulset ss to 0
Jan 20 17:15:48.562: INFO: Unexpected error: <*rest.wrapPreviousError | 0xc0036b4700>: { currentErr: <*url.Error | 0xc0038ff530>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar", Err: <*net.OpError | 0xc003934be0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003702cc0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0036b46c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 321, ErrCode: 0, DebugData: ""}, }
Jan 20 17:15:48.563: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=321, ErrCode=NO_ERROR, debug=""
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x8022ee8, 0xc002ae36c0}, 0xc0003e4f00)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2()
	test/e2e/framework/statefulset/rest.go:155 +0x35
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x8022ee8?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc0006d4480, 0x2fdd8ca?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x78?, 0x2fdc465?, 0x28?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0x75b9afa?, 0xc00034a8c8?, 0x262c967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x8022ee8?, 0xc002ae36c0?, 0xc003d08040?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x8022ee8?, 0xc002ae36c0}, 0x0?, 0x0)
	test/e2e/framework/statefulset/rest.go:154 +0x22d
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x8022ee8, 0xc002ae36c0}, {0xc0038b2680, 0x10})
	test/e2e/framework/statefulset/rest.go:84 +0x1d7
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()
	test/e2e/apps/statefulset.go:129 +0x1b2
E0120 17:15:48.563574 6645 runtime.go:79] Observed a panic: types.GinkgoError{Heading:"Your Test Panicked", Message:"When you, or your assertion library, calls Ginkgo's Fail(),\nGinkgo panics to prevent subsequent assertions from running.\n\nNormally Ginkgo rescues this panic so you shouldn't see it.\n\nHowever, if you make an assertion in a goroutine, Ginkgo can't capture the panic.\nTo circumvent this, you should call\n\n\tdefer GinkgoRecover()\n\nat the top of the goroutine that caused this panic.\n\nAlternatively, you may have made an assertion outside of a Ginkgo\nleaf node (e.g. in a container node or some out-of-band function) - please move your assertion to\nan appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).", DocLink:"mental-model-how-ginkgo-handles-failure", CodeLocation:types.CodeLocation{FileName:"test/e2e/framework/statefulset/rest.go", LineNumber:69, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x8022ee8, 0xc002ae36c0}, 0xc0003e4f00)\n\ttest/e2e/framework/statefulset/rest.go:69 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2()\n\ttest/e2e/framework/statefulset/rest.go:155 +0x35\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x8022ee8?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc0006d4480, 0x2fdd8ca?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x78?, 0x2fdc465?, 0x28?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0x75b9afa?, 0xc00034a8c8?, 0x262c967?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x8022ee8?, 0xc002ae36c0?, 0xc003d08040?)\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x8022ee8?, 0xc002ae36c0}, 0x0?, 0x0)\n\ttest/e2e/framework/statefulset/rest.go:154 +0x22d\nk8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x8022ee8, 0xc002ae36c0}, {0xc0038b2680, 0x10})\n\ttest/e2e/framework/statefulset/rest.go:84 +0x1d7\nk8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()\n\ttest/e2e/apps/statefulset.go:129 +0x1b2", CustomMessage:""}}
(Your Test Panicked test/e2e/framework/statefulset/rest.go:69
When you, or your assertion library, calls Ginkgo's Fail(), Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. However, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. Alternatively, you may have made an assertion outside of a Ginkgo leaf node (e.g. in a container node or some out-of-band function) - please move your assertion to an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
Learn more at: http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure)
goroutine 441 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x70efe60?, 0xc00026af50})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00026af50?})
	vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x70efe60, 0xc00026af50})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2.Fail({0xc002b12000, 0x170}, {0xc00034a518?, 0x75b9afa?, 0xc00034a538?})
	vendor/github.com/onsi/ginkgo/v2/core_dsl.go:352 +0x225
k8s.io/kubernetes/test/e2e/framework.Fail({0xc002f5e2c0, 0x15b}, {0xc00034a5b0?, 0xc002f5e2c0?, 0xc00034a5d8?})
	test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fb1e80, 0xc0036b4700}, {0x0?, 0xc003d08920?, 0x10?})
	test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x8022ee8, 0xc002ae36c0}, 0xc0003e4f00)
	test/e2e/framework/statefulset/rest.go:69 +0x153
k8s.io/kubernetes/test/e2e/framework/statefulset.Scale.func2()
	test/e2e/framework/statefulset/rest.go:155 +0x35
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x2744911, 0x0})
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7fe5ba8?, 0xc000084048?}, 0x8022ee8?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x7fe5ba8, 0xc000084048}, 0xc0006d4480, 0x2fdd8ca?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:662 +0x10c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7fe5ba8, 0xc000084048}, 0x78?, 0x2fdc465?, 0x28?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:596 +0x9a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7fe5ba8, 0xc000084048}, 0x75b9afa?, 0xc00034a8c8?, 0x262c967?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x8022ee8?, 0xc002ae36c0?, 0xc003d08040?)
	vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
k8s.io/kubernetes/test/e2e/framework/statefulset.Scale({0x8022ee8?, 0xc002ae36c0}, 0x0?, 0x0)
	test/e2e/framework/statefulset/rest.go:154 +0x22d
k8s.io/kubernetes/test/e2e/framework/statefulset.DeleteAllStatefulSets({0x8022ee8, 0xc002ae36c0}, {0xc0038b2680, 0x10})
	test/e2e/framework/statefulset/rest.go:84 +0x1d7
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.2()
	test/e2e/apps/statefulset.go:129 +0x1b2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2d5b74e, 0xc00049fb00})
	vendor/github.com/onsi/ginkgo/v2/internal/node.go:449 +0x1b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:750 +0x98
created by k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
	vendor/github.com/onsi/ginkgo/v2/internal/suite.go:738 +0xe3d
[AfterEach] [sig-apps] StatefulSet test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] StatefulSet test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] StatefulSet dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.606
STEP: Collecting events from namespace "statefulset-8827". 01/20/23 17:15:48.607
Jan 20 17:15:48.649: INFO: Unexpected error: failed to list events in namespace "statefulset-8827": <*url.Error | 0xc003d807e0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827/events", Err: <*net.OpError | 0xc003d86cd0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0016ff140>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0038ae080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.650: FAIL: failed to list events in namespace "statefulset-8827": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0017265c0, {0xc0038b2680, 0x10})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc002ae36c0}, {0xc0038b2680, 0x10})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001726650?, {0xc0038b2680?, 0x7fac780?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000ada2d0)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc000af5b80?, 0xc0007edfb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc000aa9a88?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc000af5b80?, 0x2946afc?}, {0xae7b420?, 0xc0007edf80?, 0x0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-apps] StatefulSet tear down framework | framework.go:193
STEP: Destroying namespace "statefulset-8827" for this suite. 01/20/23 17:15:48.65
Jan 20 17:15:48.698: FAIL: Couldn't delete ns: "statefulset-8827": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/statefulset-8827", Err:(*net.OpError)(0xc003d84140)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ada2d0)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc000af5a70?, 0xc0007eaf08?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc001a37e30?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc000af5a70?, 0x0?}, {0xae7b420?, 0x0?, 0x0?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
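The stack above shows the shape of the cleanup that failed: statefulset.Scale retries GetPodList via wait.PollImmediate until the StatefulSet's pods are gone, and the poll dies when the API server refuses connections. A rough sketch of that polling pattern, assuming a standard client-go clientset (the function name, selector, and timeouts below are illustrative):

package example

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCount polls the API server until the number of matching pods
// equals want. Returning a non-nil error from the condition aborts the
// poll immediately, which is how a refused dial surfaces as a FAIL.
func waitForPodCount(cs kubernetes.Interface, ns, selector string, want int) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Here the list failed with "connection refused", so the
			// poll stopped instead of waiting out the timeout.
			return false, fmt.Errorf("listing pods: %w", err)
		}
		return len(pods.Items) == want, nil
	})
}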
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sConntrack\sshould\sdrop\sINVALID\sconntrack\sentries\s\[Privileged\]$'
test/e2e/network/conntrack.go:473
k8s.io/kubernetes/test/e2e/network.glob..func1.6()
	test/e2e/network/conntrack.go:473 +0xc5e
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:16:08.338: failed to list events in namespace "conntrack-6497": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:16:08.383: Couldn't delete ns: "conntrack-6497": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497", Err:(*net.OpError)(0xc0034d28c0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-network] Conntrack set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:35.2
Jan 20 17:14:35.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack 01/20/23 17:14:35.201
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:35.297
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:35.357
[BeforeEach] [sig-network] Conntrack test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] Conntrack test/e2e/network/conntrack.go:98
[It] should drop INVALID conntrack entries [Privileged] test/e2e/network/conntrack.go:363
Jan 20 17:14:35.485: INFO: Waiting up to 5m0s for pod "boom-server" in namespace "conntrack-6497" to be "running and ready"
Jan 20 17:14:35.515: INFO: Pod "boom-server": Phase="Pending", Reason="", readiness=false. Elapsed: 30.242409ms
Jan 20 17:14:35.515: INFO: The phase of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:37.546: INFO: Pod "boom-server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0608998s
Jan 20 17:14:37.546: INFO: The phase of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:39.553: INFO: Pod "boom-server": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067487323s
Jan 20 17:14:39.553: INFO: The phase of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:41.549: INFO: Pod "boom-server": Phase="Running", Reason="", readiness=true. Elapsed: 6.063834332s
Jan 20 17:14:41.549: INFO: The phase of Pod boom-server is Running (Ready = true)
Jan 20 17:14:41.549: INFO: Pod "boom-server" satisfied condition "running and ready"
STEP: Server pod created on node i-03af3dbca738ba168 01/20/23 17:14:41.58
STEP: Server service created 01/20/23 17:14:41.621
Jan 20 17:14:41.662: INFO: Waiting up to 5m0s for pod "startup-script" in namespace "conntrack-6497" to be "running and ready"
Jan 20 17:14:41.694: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 32.002503ms
Jan 20 17:14:41.694: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:43.728: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066094005s
Jan 20 17:14:43.728: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:45.725: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063507321s
Jan 20 17:14:45.725: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:47.725: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063732312s
Jan 20 17:14:47.725: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:49.729: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067879136s
Jan 20 17:14:49.730: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:51.725: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063597565s
Jan 20 17:14:51.725: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:53.725: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063334066s
Jan 20 17:14:53.725: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:55.725: INFO: Pod "startup-script": Phase="Pending", Reason="", readiness=false. Elapsed: 14.063301103s
Jan 20 17:14:55.725: INFO: The phase of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:14:57.736: INFO: Pod "startup-script": Phase="Running", Reason="", readiness=true. Elapsed: 16.07419264s
Jan 20 17:14:57.736: INFO: The phase of Pod startup-script is Running (Ready = true)
Jan 20 17:14:57.736: INFO: Pod "startup-script" satisfied condition "running and ready"
STEP: Client pod created 01/20/23 17:14:57.772
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet 01/20/23 17:14:57.772
Jan 20 17:16:08.236: INFO: Unexpected error: <*url.Error | 0xc00313a5a0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497/pods/boom-server/log?container=boom-server&previous=false", Err: <*net.OpError | 0xc0031be190>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002d0a5a0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003fbc020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:16:08.236: FAIL: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497/pods/boom-server/log?container=boom-server&previous=false": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func1.6()
	test/e2e/network/conntrack.go:473 +0xc5e
[AfterEach] [sig-network] Conntrack test/e2e/framework/node/init/init.go:32
Jan 20 17:16:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-network] Conntrack test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] Conntrack dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:16:08.281
STEP: Collecting events from namespace "conntrack-6497". 01/20/23 17:16:08.281
Jan 20 17:16:08.338: INFO: Unexpected error: failed to list events in namespace "conntrack-6497": <*url.Error | 0xc00313b3e0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497/events", Err: <*net.OpError | 0xc0031be6e0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002d0b5c0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003fbc5a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:16:08.338: FAIL: failed to list events in namespace "conntrack-6497": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0011925c0, {0xc0037e0620, 0xe})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc00359c1a0}, {0xc0037e0620, 0xe})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc001192650?, {0xc0037e0620?, 0x7fac780?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000e8ab40)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc00115b7d0?, 0xc001ab0fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc000fbcbe8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc00115b7d0?, 0x2946afc?}, {0xae7b420?, 0xc001ab0f80?, 0x2a6f866?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-network] Conntrack tear down framework | framework.go:193
STEP: Destroying namespace "conntrack-6497" for this suite. 01/20/23 17:16:08.339
Jan 20 17:16:08.383: FAIL: Couldn't delete ns: "conntrack-6497": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/conntrack-6497", Err:(*net.OpError)(0xc0034d28c0)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e8ab40)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc00115b6f0?, 0xc005510f50?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc005510f40?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc00115b6f0?, 0x2624c40?}, {0xae7b420?, 0xc005510f80?, 0x26245bd?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
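The request that failed at conntrack.go:473 is a plain pod-log GET (note container=boom-server and previous=false in the URL above). A minimal client-go sketch of such a fetch, assuming a standard clientset (the helper name is illustrative, not the test's code):

package example

import (
	"context"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podLogs reads the current log of one container. The GetLogs request
// maps to GET .../namespaces/<ns>/pods/<pod>/log, the endpoint whose
// dial was refused in the failure above.
func podLogs(cs kubernetes.Interface, ns, pod, container string) (string, error) {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Previous:  false, // matches previous=false in the failing URL
	})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		return "", err
	}
	defer stream.Close()
	b, err := io.ReadAll(stream)
	return string(b), err
}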
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sendpoint\-Service\:\sudp$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc005256000, {0x75cb852, 0x9}, 0xc0050a1080)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc005256000, 0x7fb380595188?)
	test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc005256000, 0x3e?)
	test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ea4690, {0x0, 0x0, 0x7fb3aaec05b8?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func22.6.8()
	test/e2e/network/networking.go:251 +0x36
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.644: failed to list events in namespace "nettest-9573": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.688: Couldn't delete ns: "nettest-9573": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573", Err:(*net.OpError)(0xc004ee7900)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
(from junit_01.xml)
[BeforeEach] [sig-network] Networking set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:25.776
Jan 20 17:15:25.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest 01/20/23 17:15:25.777
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:25.875
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:25.939
[BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31
[It] should function for endpoint-Service: udp test/e2e/network/networking.go:250
STEP: Performing setup for networking test in namespace nettest-9573 01/20/23 17:15:26.002
STEP: creating a selector 01/20/23 17:15:26.002
STEP: Creating the service pods in kubernetes 01/20/23 17:15:26.003
Jan 20 17:15:26.003: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 20 17:15:26.232: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-9573" to be "running and ready"
Jan 20 17:15:26.265: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.085677ms
Jan 20 17:15:26.265: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:15:48.563: INFO: Encountered non-retryable error while getting pod nettest-9573/netserver-0: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/pods/netserver-0": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=519, ErrCode=NO_ERROR, debug=""
Jan 20 17:15:48.563: INFO: Unexpected error: <*fmt.wrapError | 0xc0035ef940>: { msg: "error while waiting for pod nettest-9573/netserver-0 to be running and ready: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/pods/netserver-0\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=519, ErrCode=NO_ERROR, debug=\"\"", err: <*rest.wrapPreviousError | 0xc0035ef920>{ currentErr: <*url.Error | 0xc002dca720>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/pods/netserver-0", Err: <*net.OpError | 0xc0047d1590>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00338b6e0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0035ef8e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 519, ErrCode: 0, DebugData: ""}, }, }
Jan 20 17:15:48.563: FAIL: error while waiting for pod nettest-9573/netserver-0 to be running and ready: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/pods/netserver-0": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=519, ErrCode=NO_ERROR, debug=""
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc005256000, {0x75cb852, 0x9}, 0xc0050a1080)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc005256000, 0x7fb380595188?)
	test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc005256000, 0x3e?)
	test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000ea4690, {0x0, 0x0, 0x7fb3aaec05b8?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func22.6.8()
	test/e2e/network/networking.go:251 +0x36
[AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.603
STEP: Collecting events from namespace "nettest-9573". 01/20/23 17:15:48.603
Jan 20 17:15:48.643: INFO: Unexpected error: failed to list events in namespace "nettest-9573": <*url.Error | 0xc002dcbfb0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/events", Err: <*net.OpError | 0xc0047d1ea0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002ab51a0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0002c80e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.644: FAIL: failed to list events in namespace "nettest-9573": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00286a5c0, {0xc00487e180, 0xc})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc00229f040}, {0xc00487e180, 0xc})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00286a650?, {0xc00487e180?, 0x7fac780?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
	test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000ea4690)
	test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc00114e4a0?, 0x13?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc00114e4a0?, 0x2946afc?}, {0xae7b420?, 0xc0006b1780?, 0xc00229f040?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193
STEP: Destroying namespace "nettest-9573" for this suite. 01/20/23 17:15:48.644
Jan 20 17:15:48.688: FAIL: Couldn't delete ns: "nettest-9573": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/nettest-9573", Err:(*net.OpError)(0xc004ee7900)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ea4690)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc00114e3e0?, 0x13?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc00114e3e0?, 0xc0050a0900?}, {0xae7b420?, 0x39ef9a0?, 0xc00174fd10?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
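Both failures in this test combine a fatal dial error with an informational GOAWAY from an earlier attempt, and the framework logged the refused dial as non-retryable. A sketch of how such a wrapped error chain can be classified in Go (illustrative, not the framework's actual logic):

package example

import (
	"errors"
	"syscall"
)

// isRetryable is illustrative: errors.Is walks the wrapped chain
// (url.Error -> net.OpError -> os.SyscallError -> syscall.Errno),
// so a refused connection is detected directly. The <syscall.Errno>0x6f
// in the dumps above is 111, ECONNREFUSED on Linux.
func isRetryable(err error) bool {
	if errors.Is(err, syscall.ECONNREFUSED) {
		// The API server endpoint actively refused the dial; polling
		// again cannot succeed until the server is back up.
		return false
	}
	return true
}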
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\snode\-Service\:\shttp$'
test/e2e/framework/network/utils.go:866
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000bf0a80, {0x75cb852, 0x9}, 0xc003cfe780)
	test/e2e/framework/network/utils.go:866 +0x1d0
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000bf0a80, 0x7f5f30576cd8?)
	test/e2e/framework/network/utils.go:763 +0x55
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000bf0a80, 0x3d?)
	test/e2e/framework/network/utils.go:778 +0x3e
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000cf2690, {0xc00463ff20, 0x1, 0x0?})
	test/e2e/framework/network/utils.go:131 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func22.6.4()
	test/e2e/network/networking.go:193 +0x51
(from junit_01.xml)
[BeforeEach] [sig-network] Networking set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:17:17.501
Jan 20 17:17:17.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest 01/20/23 17:17:17.502
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:17:17.593
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:17:17.658
[BeforeEach] [sig-network] Networking test/e2e/framework/metrics/init/init.go:31
[It] should function for node-Service: http test/e2e/network/networking.go:192
STEP: Performing setup for networking test in namespace nettest-712 01/20/23 17:17:17.715
STEP: creating a selector 01/20/23 17:17:17.715
STEP: Creating the service pods in kubernetes 01/20/23 17:17:17.715
Jan 20 17:17:17.716: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 20 17:17:17.918: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "nettest-712" to be "running and ready"
Jan 20 17:17:17.951: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 32.641893ms
Jan 20 17:17:17.951: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:17:19.981: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062645147s
Jan 20 17:17:19.981: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:17:21.981: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062269132s
Jan 20 17:17:21.981: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:17:23.982: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063206376s
Jan 20 17:17:23.982: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:17:25.981: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062945764s
Jan 20 17:17:25.981: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:17:27.980: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.06162784s
Jan 20 17:17:27.980: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 20 17:17:29.980: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.06205636s
Jan 20 17:17:29.980: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 20 17:17:31.980: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.061952017s
Jan 20 17:17:31.980: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 20 17:17:33.981: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.062642201s
Jan 20 17:17:33.981: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 20 17:17:35.981: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.062460246s
Jan 20 17:17:35.981: INFO: The phase of Pod netserver-0 is Running (Ready = false)
Jan 20 17:17:37.981: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false.
Elapsed: 20.06237493s Jan 20 17:17:37.981: INFO: The phase of Pod netserver-0 is Running (Ready = false) Jan 20 17:17:39.982: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.063661288s Jan 20 17:17:39.982: INFO: The phase of Pod netserver-0 is Running (Ready = true) Jan 20 17:17:39.982: INFO: Pod "netserver-0" satisfied condition "running and ready" Jan 20 17:17:40.011: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "nettest-712" to be "running and ready" Jan 20 17:17:40.040: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 28.846032ms Jan 20 17:17:40.040: INFO: The phase of Pod netserver-1 is Running (Ready = true) Jan 20 17:17:40.040: INFO: Pod "netserver-1" satisfied condition "running and ready" Jan 20 17:17:40.069: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "nettest-712" to be "running and ready" Jan 20 17:17:40.098: INFO: Pod "netserver-2": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 28.890391ms Jan 20 17:17:40.098: INFO: The phase of Pod netserver-2 is Failed which is unexpected, pod status: v1.PodStatus{Phase:"Failed", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"DisruptionTarget", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 20, 17, 17, 26, 0, time.Local), Reason:"TerminationByKubelet", Message:"Pod was terminated in response to imminent node shutdown."}, v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 20, 17, 17, 17, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 20, 17, 17, 17, 0, time.Local), Reason:"PodFailed", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 20, 17, 17, 17, 0, time.Local), Reason:"PodFailed", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 20, 17, 17, 17, 0, time.Local), Reason:"", Message:""}}, Message:"Pod was terminated in response to imminent node shutdown.", Reason:"Terminated", NominatedNodeName:"", HostIP:"172.20.41.86", PodIP:"100.96.1.61", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.1.61"}}, StartTime:time.Date(2023, time.January, 20, 17, 17, 17, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"webserver", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000d2e9a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/agnhost:2.43", ImageID:"registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e", ContainerID:"containerd://4a5eddeb7aae48102e92be1e573ebe35d2e48d4b5041d31db76d7387fbbbc43d", Started:(*bool)(0xc004db6fa9)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Jan 20 
17:17:40.098: INFO: Error evaluating pod condition running and ready: final error: pod failed permanently Jan 20 17:17:40.098: INFO: Unexpected error: <*fmt.wrapError | 0xc0011247c0>: { msg: "error while waiting for pod nettest-712/netserver-2 to be running and ready: final error: pod failed permanently", err: <*pod.FinalErr | 0xc00044f460>{ Err: <*errors.errorString | 0xc00044f450>{ s: "pod failed permanently", }, }, } Jan 20 17:17:40.098: FAIL: error while waiting for pod nettest-712/netserver-2 to be running and ready: final error: pod failed permanently Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000bf0a80, {0x75cb852, 0x9}, 0xc003cfe780) test/e2e/framework/network/utils.go:866 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000bf0a80, 0x7f5f30576cd8?) test/e2e/framework/network/utils.go:763 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000bf0a80, 0x3d?) test/e2e/framework/network/utils.go:778 +0x3e k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000cf2690, {0xc00463ff20, 0x1, 0x0?}) test/e2e/framework/network/utils.go:131 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func22.6.4() test/e2e/network/networking.go:193 +0x51 [AfterEach] [sig-network] Networking test/e2e/framework/node/init/init.go:32 Jan 20 17:17:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 20 17:17:40.130: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:42.162: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:44.161: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:46.162: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:48.161: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:50.162: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:52.166: INFO: Condition Ready of node i-03af3dbca738ba168 is false instead of true. 
Reason: KubeletNotReady, message: node is shutting down Jan 20 17:17:52.166: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:54.162: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:17:54.162: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:56.167: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:17:56.167: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:58.164: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:17:58.164: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:00.161: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:00.161: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:02.167: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:02.167: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:04.161: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. 
Failure Jan 20 17:18:04.161: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:06.161: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:06.161: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:08.161: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:08.161: INFO: Condition Ready of node i-048afc59cd0c5fa4a is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:10.161: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:12.162: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:14.162: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:16.161: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:18.163: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:20.162: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:22.161: INFO: Condition Ready of node i-03af3dbca738ba168 is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure [DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m01/20/23 17:18:24.162�[0m �[1mSTEP:�[0m Collecting events from namespace "nettest-712". 
[DeferCleanup (Each)] [sig-network] Networking test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] Networking dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:18:24.162
STEP: Collecting events from namespace "nettest-712". 01/20/23 17:18:24.162
STEP: Found 20 events. 01/20/23 17:18:24.192
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:17 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-712/netserver-0 to i-03af3dbca738ba168
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:17 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-712/netserver-1 to i-0460dbd3e490039bb
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:17 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-712/netserver-2 to i-048afc59cd0c5fa4a
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:17 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-712/netserver-3 to i-0f775d321e19704c3
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:18 +0000 UTC - event for netserver-1: {kubelet i-0460dbd3e490039bb} Created: Created container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:18 +0000 UTC - event for netserver-1: {kubelet i-0460dbd3e490039bb} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:18 +0000 UTC - event for netserver-2: {kubelet i-048afc59cd0c5fa4a} Created: Created container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:18 +0000 UTC - event for netserver-2: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:18 +0000 UTC - event for netserver-2: {kubelet i-048afc59cd0c5fa4a} Started: Started container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:19 +0000 UTC - event for netserver-1: {kubelet i-0460dbd3e490039bb} Started: Started container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:19 +0000 UTC - event for netserver-3: {kubelet i-0f775d321e19704c3} Started: Started container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:19 +0000 UTC - event for netserver-3: {kubelet i-0f775d321e19704c3} Created: Created container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:19 +0000 UTC - event for netserver-3: {kubelet i-0f775d321e19704c3} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:20 +0000 UTC - event for netserver-0: {kubelet i-03af3dbca738ba168} Started: Started container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:20 +0000 UTC - event for netserver-0: {kubelet i-03af3dbca738ba168} Created: Created container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:20 +0000 UTC - event for netserver-0: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:20 +0000 UTC - event for netserver-2: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:17:50 +0000 UTC - event for netserver-0: {kubelet i-03af3dbca738ba168} Killing: Stopping container webserver
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:18:08 +0000 UTC - event for netserver-2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod nettest-712/netserver-2
Jan 20 17:18:24.193: INFO: At 2023-01-20 17:18:23 +0000 UTC - event for netserver-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod nettest-712/netserver-0
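The event dump above is produced by the framework's namespace-dump cleanup after the failure. The same listing can be reproduced with a plain client-go call; a sketch under the same clientset wiring as the previous snippet (dumpEvents is a hypothetical helper, not framework code):

```go
// Sketch: reproduce the "Collecting events from namespace" dump with client-go.
// "client" is built with clientcmd as in the taint example above.
func dumpEvents(ctx context.Context, client kubernetes.Interface, ns string) error {
	events, err := client.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("failed to list events in namespace %q: %w", ns, err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		// Same layout as the log above: timestamp, involved object, source, reason, message.
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
	return nil
}
```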
Jan 20 17:18:24.224: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 20 17:18:24.224: INFO: netserver-0 i-03af3dbca738ba168 Failed [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:53 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:53 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC } {DisruptionTarget True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:53 +0000 UTC TerminationByKubelet Pod was terminated in response to imminent node shutdown.}]
Jan 20 17:18:24.224: INFO: netserver-1 i-0460dbd3e490039bb Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC }]
Jan 20 17:18:24.224: INFO: netserver-2 i-048afc59cd0c5fa4a Failed [{DisruptionTarget True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:26 +0000 UTC TerminationByKubelet Pod was terminated in response to imminent node shutdown.} {Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC }]
Jan 20 17:18:24.224: INFO: netserver-3 i-0f775d321e19704c3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:17:17 +0000 UTC }]
Jan 20 17:18:24.224: INFO:
Jan 20 17:18:24.736: INFO: Logging node info for node i-02cae73514916eb60
Jan 20 17:18:24.765: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 6920 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"e6:28:1d:38:9c:ba"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status} {flanneld Update v1 2023-01-20 17:16:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:16:23 +0000 UTC,LastTransitionTime:2023-01-20 17:16:23 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b 
registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:18:24.766: INFO: Logging kubelet events for node i-02cae73514916eb60 Jan 20 17:18:24.800: INFO: Logging pods the kubelet thinks is on node i-02cae73514916eb60 Jan 20 17:18:24.856: INFO: etcd-manager-events-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:18:24.856: INFO: kube-scheduler-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container kube-scheduler ready: true, restart count 1 Jan 20 17:18:24.856: INFO: ebs-csi-node-lfls8 started at 2023-01-20 17:06:58 +0000 UTC (0+3 container statuses recorded) Jan 20 17:18:24.856: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:18:24.856: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:18:24.856: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:18:24.856: INFO: kube-flannel-ds-5nkqq started at 2023-01-20 17:06:58 +0000 UTC (2+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:18:24.856: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:18:24.856: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:18:24.856: INFO: kops-controller-mqtlq started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container kops-controller ready: true, restart count 2 Jan 20 17:18:24.856: INFO: etcd-manager-main-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:18:24.856: INFO: kube-apiserver-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+2 container statuses recorded) Jan 20 17:18:24.856: INFO: Container healthcheck ready: true, restart count 1 Jan 20 17:18:24.856: INFO: Container 
kube-apiserver ready: true, restart count 2 Jan 20 17:18:24.856: INFO: kube-controller-manager-i-02cae73514916eb60 started at 2023-01-20 17:06:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 20 17:18:24.856: INFO: kube-proxy-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:18:24.856: INFO: dns-controller-74d4646d88-p7zxr started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container dns-controller ready: true, restart count 1 Jan 20 17:18:24.856: INFO: ebs-csi-controller-c9fc69cf5-kn566 started at 2023-01-20 17:07:01 +0000 UTC (0+5 container statuses recorded) Jan 20 17:18:24.856: INFO: Container csi-attacher ready: true, restart count 2 Jan 20 17:18:24.856: INFO: Container csi-provisioner ready: true, restart count 2 Jan 20 17:18:24.856: INFO: Container csi-resizer ready: true, restart count 1 Jan 20 17:18:24.856: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:18:24.856: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:18:24.856: INFO: aws-cloud-controller-manager-2qgs4 started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:24.856: INFO: Container aws-cloud-controller-manager ready: true, restart count 2 Jan 20 17:18:25.054: INFO: Latency metrics for node i-02cae73514916eb60 Jan 20 17:18:25.054: INFO: Logging node info for node i-03af3dbca738ba168 Jan 20 17:18:25.125: INFO: Node Info: &Node{ObjectMeta:{i-03af3dbca738ba168 f2b83166-36e9-4e14-8fe3-7e4da5f5a758 8971 0 2023-01-20 17:07:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-03af3dbca738ba168 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-03af3dbca738ba168 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"26:10:99:e2:a4:c5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.58.114 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 
17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {flanneld Update v1 2023-01-20 17:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-20 17:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03af3dbca738ba168,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:08:29 +0000 UTC,LastTransitionTime:2023-01-20 17:08:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:21 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:21 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:21 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:18:21 +0000 UTC,LastTransitionTime:2023-01-20 17:18:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.114,},NodeAddress{Type:ExternalIP,Address:54.92.220.56,},NodeAddress{Type:InternalDNS,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:Hostname,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-92-220-56.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a474c9a9b98f9bdaf7a97ffdf305e,SystemUUID:ec2a474c-9a9b-98f9-bdaf-7a97ffdf305e,BootID:67cb1ab9-8c0f-4a0e-aa27-d7cde3225458,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:18:25.126: INFO: Logging kubelet events for node i-03af3dbca738ba168 Jan 20 17:18:25.178: INFO: Logging pods the kubelet thinks is on node i-03af3dbca738ba168 Jan 20 17:18:25.219: INFO: hostexec-i-03af3dbca738ba168-4rdz8 started at 2023-01-20 17:16:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:18:25.219: INFO: hostexec-i-03af3dbca738ba168-48rhp started at 2023-01-20 17:17:12 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:18:25.219: INFO: webserver-7f5969cbc7-vhj88 started at <nil> (0+0 container statuses recorded) Jan 20 17:18:25.219: INFO: kube-flannel-ds-6vmgt started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:18:25.219: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:18:25.219: INFO: Container kube-flannel ready: false, restart count 1 Jan 20 17:18:25.219: INFO: boom-server started at 2023-01-20 17:14:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container boom-server ready: false, restart count 0 Jan 20 17:18:25.219: INFO: hostexec-i-03af3dbca738ba168-4lrj4 started at 2023-01-20 17:16:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:18:25.219: INFO: netserver-0 started at 2023-01-20 17:17:17 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container webserver ready: true, restart count 0 Jan 20 17:18:25.219: INFO: webserver-7f5969cbc7-5c6sq started at <nil> (0+0 container statuses recorded) Jan 20 17:18:25.219: INFO: test-pod started at <nil> (0+0 container statuses recorded) Jan 20 17:18:25.219: INFO: kube-proxy-i-03af3dbca738ba168 started at 2023-01-20 17:07:42 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:18:25.219: INFO: hostexec-i-03af3dbca738ba168-q6k7b started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses 
recorded) Jan 20 17:18:25.219: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:18:25.219: INFO: netserver-0 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container webserver ready: false, restart count 0 Jan 20 17:18:25.219: INFO: service-proxy-disabled-x6wst started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 20 17:18:25.219: INFO: local-client started at 2023-01-20 17:17:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container local-client ready: true, restart count 0 Jan 20 17:18:25.219: INFO: ebs-csi-node-wmgfk started at 2023-01-20 17:18:21 +0000 UTC (0+3 container statuses recorded) Jan 20 17:18:25.219: INFO: Container ebs-plugin ready: false, restart count 0 Jan 20 17:18:25.219: INFO: Container liveness-probe ready: false, restart count 0 Jan 20 17:18:25.219: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 20 17:18:25.219: INFO: hostexec-i-03af3dbca738ba168-9hfg2 started at 2023-01-20 17:17:09 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:18:25.219: INFO: webserver-7f5969cbc7-pn9nr started at <nil> (0+0 container statuses recorded) Jan 20 17:18:25.219: INFO: coredns-559769c974-6f8t8 started at 2023-01-20 17:08:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container coredns ready: false, restart count 0 Jan 20 17:18:25.219: INFO: service-proxy-toggled-zghmz started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:18:25.219: INFO: hostexec-i-03af3dbca738ba168-4qz69 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:25.219: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:18:26.843: INFO: Latency metrics for node i-03af3dbca738ba168 Jan 20 17:18:26.843: INFO: Logging node info for node i-0460dbd3e490039bb Jan 20 17:18:26.875: INFO: Node Info: &Node{ObjectMeta:{i-0460dbd3e490039bb 3ed25acd-2f33-4687-a606-3d5a944590c8 8459 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0460dbd3e490039bb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0460dbd3e490039bb topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-6562":"i-0460dbd3e490039bb","ebs.csi.aws.com":"i-0460dbd3e490039bb"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"0a:dc:21:c8:4e:3e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.44.83 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:17:28 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:17:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0460dbd3e490039bb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:11:02 +0000 UTC,LastTransitionTime:2023-01-20 17:11:02 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:17:28 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:17:28 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:17:28 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:17:28 +0000 UTC,LastTransitionTime:2023-01-20 17:10:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.83,},NodeAddress{Type:ExternalIP,Address:3.85.92.171,},NodeAddress{Type:InternalDNS,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-85-92-171.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec214ec8f7aec9bca6997e12c5d9fa17,SystemUUID:ec214ec8-f7ae-c9bc-a699-7e12c5d9fa17,BootID:6958a09a-b123-4522-ba50-97e69196d1e0,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211,DevicePath:,},},Config:nil,},} Jan 20 17:18:26.875: INFO: Logging kubelet events for node i-0460dbd3e490039bb Jan 20 17:18:26.913: INFO: Logging pods the kubelet thinks is on node i-0460dbd3e490039bb Jan 20 17:18:27.215: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:17:07 +0000 UTC (0+7 container statuses recorded) Jan 20 17:18:27.215: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:18:27.215: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:18:27.215: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:18:27.215: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 
17:18:27.215: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:18:27.215: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:18:27.215: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:18:27.215: INFO: startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 started at 2023-01-20 17:14:57 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container busybox ready: false, restart count 0 Jan 20 17:18:27.215: INFO: webserver-7f5969cbc7-4f87p started at <nil> (0+0 container statuses recorded) Jan 20 17:18:27.215: INFO: pod-d9b2c311-b86f-4135-a026-635f052e5073 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container write-pod ready: true, restart count 0 Jan 20 17:18:27.215: INFO: service-proxy-toggled-bvmzm started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:18:27.215: INFO: pod-init-4f84b132-56e4-435b-a644-0dce661aa7aa started at 2023-01-20 17:17:05 +0000 UTC (2+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Init container init1 ready: false, restart count 3 Jan 20 17:18:27.215: INFO: Init container init2 ready: false, restart count 0 Jan 20 17:18:27.215: INFO: Container run1 ready: false, restart count 0 Jan 20 17:18:27.215: INFO: pod-client started at 2023-01-20 17:18:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container pod-client ready: false, restart count 0 Jan 20 17:18:27.215: INFO: verify-service-down-host-exec-pod started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:18:27.215: INFO: simpletest.rc-jrszk started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container nginx ready: true, restart count 0 Jan 20 17:18:27.215: INFO: service-proxy-disabled-hc668 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:18:27.215: INFO: netserver-1 started at 2023-01-20 17:17:17 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container webserver ready: true, restart count 0 Jan 20 17:18:27.215: INFO: hostexec-i-0460dbd3e490039bb-gl7xm started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.215: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:18:27.215: INFO: pfpod started at 2023-01-20 17:16:54 +0000 UTC (0+2 container statuses recorded) Jan 20 17:18:27.215: INFO: Container portforwardtester ready: false, restart count 0 Jan 20 17:18:27.215: INFO: Container readiness ready: false, restart count 0 Jan 20 17:18:27.215: INFO: pod-subpath-test-inlinevolume-npfl started at 2023-01-20 17:18:24 +0000 UTC (1+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Init container init-volume-inlinevolume-npfl ready: false, restart count 0 Jan 20 17:18:27.216: INFO: Container test-container-subpath-inlinevolume-npfl ready: false, restart count 0 Jan 20 17:18:27.216: INFO: kube-flannel-ds-q8m2b started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:18:27.216: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:18:27.216: INFO: 
Container kube-flannel ready: true, restart count 2 Jan 20 17:18:27.216: INFO: netserver-1 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Container webserver ready: true, restart count 0 Jan 20 17:18:27.216: INFO: hostexec-i-0460dbd3e490039bb-4pv88 started at 2023-01-20 17:18:24 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:18:27.216: INFO: test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Container etcd ready: true, restart count 0 Jan 20 17:18:27.216: INFO: test-pod-1 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Container token-test ready: true, restart count 0 Jan 20 17:18:27.216: INFO: kube-proxy-i-0460dbd3e490039bb started at 2023-01-20 17:07:33 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:18:27.216: INFO: ebs-csi-node-kmj84 started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded) Jan 20 17:18:27.216: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:18:27.216: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:18:27.216: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:18:27.216: INFO: downwardapi-volume-65e507d7-2728-4f27-b145-837b0a794a2f started at 2023-01-20 17:15:24 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.216: INFO: Container client-container ready: false, restart count 0 Jan 20 17:18:27.445: INFO: Latency metrics for node i-0460dbd3e490039bb Jan 20 17:18:27.445: INFO: Logging node info for node i-048afc59cd0c5fa4a Jan 20 17:18:27.474: INFO: Node Info: &Node{ObjectMeta:{i-048afc59cd0c5fa4a 906bdaca-cfdb-4619-98d1-2751663efe41 9183 0 2023-01-20 17:07:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-048afc59cd0c5fa4a kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-048afc59cd0c5fa4a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-4314":"i-048afc59cd0c5fa4a","csi-mock-csi-mock-volumes-3661":"i-048afc59cd0c5fa4a"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"52:68:72:e8:79:3f"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.41.86 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 
{"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:18:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-048afc59cd0c5fa4a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:12 +0000 UTC,LastTransitionTime:2023-01-20 17:18:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:06 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:06 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:06 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:18:06 +0000 UTC,LastTransitionTime:2023-01-20 17:18:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.86,},NodeAddress{Type:ExternalIP,Address:34.201.135.194,},NodeAddress{Type:InternalDNS,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:Hostname,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-201-135-194.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2886bb32c49932d355813f2015452a,SystemUUID:ec2886bb-32c4-9932-d355-813f2015452a,BootID:c3c6217a-92a9-4cf1-a92f-5cf2a5908c35,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:18:27.475: INFO: Logging kubelet events for node i-048afc59cd0c5fa4a Jan 20 17:18:27.508: INFO: Logging pods the kubelet thinks is on node i-048afc59cd0c5fa4a Jan 20 17:18:27.602: INFO: kube-proxy-i-048afc59cd0c5fa4a started at 2023-01-20 17:07:31 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:18:27.602: INFO: coredns-559769c974-mkzlp started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container coredns ready: true, restart count 1 Jan 20 17:18:27.602: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:18:06 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:18:27.602: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:18:06 +0000 UTC (0+7 container statuses recorded) Jan 20 17:18:27.602: INFO: Container csi-attacher ready: false, restart count 0 Jan 20 17:18:27.602: INFO: Container csi-provisioner ready: false, restart count 0 Jan 20 17:18:27.602: INFO: Container csi-resizer ready: false, restart count 0 Jan 20 17:18:27.602: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 20 17:18:27.602: INFO: Container hostpath ready: false, restart count 0 Jan 20 17:18:27.602: INFO: Container liveness-probe ready: false, restart count 0 Jan 20 17:18:27.602: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 20 17:18:27.602: INFO: csi-mockplugin-0 started at 2023-01-20 17:18:06 +0000 UTC (0+3 container statuses recorded) Jan 20 17:18:27.602: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:18:27.602: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:18:27.602: INFO: Container mock ready: true, 
restart count 0 Jan 20 17:18:27.602: INFO: netserver-2 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container webserver ready: false, restart count 0 Jan 20 17:18:27.602: INFO: csi-mockplugin-resizer-0 started at 2023-01-20 17:18:06 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:18:27.602: INFO: webserver-7f5969cbc7-t7nfh started at 2023-01-20 17:18:24 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container httpd ready: false, restart count 0 Jan 20 17:18:27.602: INFO: pod-failure-failjob-q2ps6 started at <nil> (0+0 container statuses recorded) Jan 20 17:18:27.602: INFO: webserver-7f5969cbc7-r972d started at <nil> (0+0 container statuses recorded) Jan 20 17:18:27.602: INFO: csi-hostpathplugin-0 started at <nil> (0+0 container statuses recorded) Jan 20 17:18:27.602: INFO: coredns-autoscaler-7cb5c5b969-kxr22 started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container autoscaler ready: false, restart count 0 Jan 20 17:18:27.602: INFO: ebs-csi-node-dkvln started at 2023-01-20 17:18:06 +0000 UTC (0+3 container statuses recorded) Jan 20 17:18:27.602: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:18:27.602: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:18:27.602: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:18:27.602: INFO: kube-flannel-ds-nlnn2 started at 2023-01-20 17:18:06 +0000 UTC (2+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:18:27.602: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:18:27.602: INFO: Container kube-flannel ready: true, restart count 0 Jan 20 17:18:27.602: INFO: pod-failure-failjob-99sh8 started at 2023-01-20 17:18:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container c ready: false, restart count 0 Jan 20 17:18:27.602: INFO: hostexec-i-048afc59cd0c5fa4a-ml4fl started at <nil> (0+0 container statuses recorded) Jan 20 17:18:27.602: INFO: startup-script started at 2023-01-20 17:14:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container startup-script ready: false, restart count 0 Jan 20 17:18:27.602: INFO: netserver-2 started at 2023-01-20 17:17:17 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:27.602: INFO: Container webserver ready: false, restart count 0 Jan 20 17:18:28.300: INFO: Latency metrics for node i-048afc59cd0c5fa4a Jan 20 17:18:28.300: INFO: Logging node info for node i-0f775d321e19704c3 Jan 20 17:18:28.329: INFO: Node Info: &Node{ObjectMeta:{i-0f775d321e19704c3 19607256-f185-404f-84dd-0198c716bca7 8883 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f775d321e19704c3 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0f775d321e19704c3 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-4295":"i-0f775d321e19704c3","ebs.csi.aws.com":"i-0f775d321e19704c3"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"72:43:d6:40:e8:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.55.61 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:18:09 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:18:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0f775d321e19704c3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054814720 0} {<nil>} 3959780Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949957120 0} {<nil>} 3857380Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:09:35 +0000 UTC,LastTransitionTime:2023-01-20 17:09:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:09 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:09 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:18:09 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:18:09 +0000 UTC,LastTransitionTime:2023-01-20 17:09:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.61,},NodeAddress{Type:ExternalIP,Address:3.93.201.229,},NodeAddress{Type:InternalDNS,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-93-201-229.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a4be20ed59f70fa8678b6d03004b4,SystemUUID:ec2a4be2-0ed5-9f70-fa86-78b6d03004b4,BootID:d3100caa-b833-4d03-b5c0-4cb4a8b87060,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33,DevicePath:,},},Config:nil,},} Jan 20 17:18:28.329: INFO: Logging kubelet events for node i-0f775d321e19704c3 Jan 20 17:18:28.366: INFO: Logging pods the kubelet thinks is on 
node i-0f775d321e19704c3 Jan 20 17:18:28.442: INFO: netserver-3 started at 2023-01-20 17:17:17 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container webserver ready: true, restart count 0 Jan 20 17:18:28.442: INFO: ebs-csi-node-74dsh started at 2023-01-20 17:07:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:18:28.442: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:18:28.442: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:18:28.442: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:18:28.442: INFO: indexed-job-0-smdmd started at 2023-01-20 17:17:10 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container c ready: false, restart count 0 Jan 20 17:18:28.442: INFO: coredns-autoscaler-7cb5c5b969-zvbqv started at 2023-01-20 17:17:40 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container autoscaler ready: true, restart count 0 Jan 20 17:18:28.442: INFO: webserver-74b5ffd748-g7bn8 started at 2023-01-20 17:18:27 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container httpd ready: false, restart count 0 Jan 20 17:18:28.442: INFO: test-recreate-deployment-cff6dc657-xtsjf started at 2023-01-20 17:17:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container httpd ready: true, restart count 0 Jan 20 17:18:28.442: INFO: service-proxy-disabled-jg82r started at 2023-01-20 17:17:52 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:18:28.442: INFO: pvc-volume-tester-v7khp started at 2023-01-20 17:13:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container volume-tester ready: false, restart count 0 Jan 20 17:18:28.442: INFO: test-pod-3 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container token-test ready: true, restart count 0 Jan 20 17:18:28.442: INFO: webserver-7f5969cbc7-rq49w started at 2023-01-20 17:18:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container httpd ready: true, restart count 0 Jan 20 17:18:28.442: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:16:55 +0000 UTC (0+7 container statuses recorded) Jan 20 17:18:28.442: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:18:28.442: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:18:28.442: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:18:28.442: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:18:28.442: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:18:28.442: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:18:28.442: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:18:28.442: INFO: oidc-discovery-validator started at 2023-01-20 17:17:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container oidc-discovery-validator ready: false, restart count 0 Jan 20 17:18:28.442: INFO: indexed-job-3-xm8cf started at 2023-01-20 17:17:18 +0000 UTC (0+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Container c ready: false, restart count 0 Jan 20 17:18:28.442: INFO: kube-flannel-ds-d9rm4 started at 2023-01-20 17:07:54 +0000 UTC (2+1 container statuses recorded) Jan 20 17:18:28.442: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 
17:18:28.442: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:18:28.442: INFO: Container kube-flannel ready: true, restart count 2
Jan 20 17:18:28.442: INFO: service-proxy-toggled-8j48l started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container service-proxy-toggled ready: true, restart count 0
Jan 20 17:18:28.442: INFO: netserver-3 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container webserver ready: true, restart count 0
Jan 20 17:18:28.442: INFO: indexed-job-1-vvpsb started at 2023-01-20 17:17:10 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container c ready: false, restart count 0
Jan 20 17:18:28.442: INFO: kube-proxy-i-0f775d321e19704c3 started at 2023-01-20 17:07:34 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:18:28.442: INFO: test-pod-2 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container token-test ready: true, restart count 0
Jan 20 17:18:28.442: INFO: externalname-service-fcg7p started at 2023-01-20 17:18:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container externalname-service ready: false, restart count 0
Jan 20 17:18:28.442: INFO: service-proxy-disabled-xwb98 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container service-proxy-disabled ready: true, restart count 0
Jan 20 17:18:28.442: INFO: webserver-7f5969cbc7-9p2pn started at 2023-01-20 17:18:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container httpd ready: true, restart count 0
Jan 20 17:18:28.442: INFO: indexed-job-2-v6ngc started at 2023-01-20 17:17:18 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container c ready: false, restart count 0
Jan 20 17:18:28.442: INFO: simpletest.rc-9xd2k started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container nginx ready: true, restart count 0
Jan 20 17:18:28.442: INFO: pod-ephm-test-projected-fw6j started at 2023-01-20 17:17:15 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:18:28.442: INFO: Container test-container-subpath-projected-fw6j ready: false, restart count 0
Jan 20 17:18:28.635: INFO: Latency metrics for node i-0f775d321e19704c3
[DeferCleanup (Each)] [sig-network] Networking tear down framework | framework.go:193
STEP: Destroying namespace "nettest-712" for this suite. 01/20/23 17:18:28.635
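The per-node pod inventory above ("Logging pods the kubelet thinks is on node ...") can be reproduced against a live cluster by listing pods with a spec.nodeName field selector. A minimal client-go sketch, assuming only the kubeconfig path and node name taken from this run; this is an illustration, not the framework's actual dump helper:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the e2e run above; node name from the dump.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List pods in all namespaces scheduled to this node -- the same set the
	// framework prints as "pods the kubelet thinks is on node".
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=i-0f775d321e19704c3",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}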
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-network\]\sServices\sshould\simplement\sservice\.kubernetes\.io\/service\-proxy\-name$'
test/e2e/network/service.go:4058
k8s.io/kubernetes/test/e2e/network.launchHostExecPod({0x8022ee8, 0xc00374b040}, {0xc004360e10, 0xd}, {0x76695d3, 0x21})
	test/e2e/network/service.go:4058 +0x1bd
k8s.io/kubernetes/test/e2e/network.verifyServeHostnameServiceDown({0x8022ee8, 0xc00374b040}, {0xc004360e10, 0xd}, {0xc004421ae0, 0xd}, 0x3?)
	test/e2e/network/service.go:403 +0x8d
k8s.io/kubernetes/test/e2e/network.glob..func26.31()
	test/e2e/network/service.go:2283 +0x499
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.645: failed to list events in namespace "services-2528": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.686: Couldn't delete ns: "services-2528": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528", Err:(*net.OpError)(0xc002e22eb0)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
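Both follow-up failures above share one symptom: the apiserver endpoint 100.26.139.144:443 stopped accepting connections mid-test, so every subsequent API call (event dump, namespace delete) failed the same way. A minimal sketch, assuming only the endpoint copied from the failure messages, of the kind of reachability probe that separates a control-plane outage from a genuine test failure:

package main

import (
	"fmt"
	"net"
	"time"
)

// Endpoint copied from the failure messages above; adjust for another cluster.
const apiEndpoint = "100.26.139.144:443"

func main() {
	// A "connect: connection refused" here means the apiserver itself is down
	// or restarting, so in-flight e2e calls will all fail with the same error.
	conn, err := net.DialTimeout("tcp", apiEndpoint, 5*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver TCP endpoint is accepting connections")
}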
[BeforeEach] [sig-network] Services set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:54.61
Jan 20 17:14:54.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services 01/20/23 17:14:54.611
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:54.987
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:55.07
[BeforeEach] [sig-network] Services test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] Services test/e2e/network/service.go:766
[It] should implement service.kubernetes.io/service-proxy-name test/e2e/network/service.go:2256
STEP: creating service-disabled in namespace services-2528 01/20/23 17:14:55.2
STEP: creating service service-proxy-disabled in namespace services-2528 01/20/23 17:14:55.201
STEP: creating replication controller service-proxy-disabled in namespace services-2528 01/20/23 17:14:55.317
I0120 17:14:55.436988 6789 runners.go:193] Created replication controller with name: service-proxy-disabled, namespace: services-2528, replica count: 3
I0120 17:14:58.488389 6789 runners.go:193] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0120 17:15:01.489204 6789 runners.go:193] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0120 17:15:04.489411 6789 runners.go:193] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: creating service in namespace services-2528 01/20/23 17:15:04.526
STEP: creating service service-proxy-toggled in namespace services-2528 01/20/23 17:15:04.526
STEP: creating replication controller service-proxy-toggled in namespace services-2528 01/20/23 17:15:04.576
I0120 17:15:04.629668 6789 runners.go:193] Created replication controller with name: service-proxy-toggled, namespace: services-2528, replica count: 3
I0120 17:15:07.681038 6789 runners.go:193] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0120 17:15:10.681889 6789 runners.go:193] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: verifying service is up 01/20/23 17:15:10.714
Jan 20 17:15:10.714: INFO: Creating new host exec pod
Jan 20 17:15:10.751: INFO: Waiting up to 5m0s for pod "verify-service-up-host-exec-pod" in namespace "services-2528" to be "running and ready"
Jan 20 17:15:10.783: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.101874ms
Jan 20 17:15:10.783: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:15:12.820: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068672117s
Jan 20 17:15:12.820: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:15:14.820: INFO: Pod "verify-service-up-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068942201s
Jan 20 17:15:14.820: INFO: The phase of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 20 17:15:16.814: INFO: Pod "verify-service-up-host-exec-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.062963112s
Jan 20 17:15:16.814: INFO: The phase of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Jan 20 17:15:16.814: INFO: Pod "verify-service-up-host-exec-pod" satisfied condition "running and ready"
Jan 20 17:15:16.814: INFO: Creating new exec pod
Jan 20 17:15:16.849: INFO: Waiting up to 5m0s for pod "verify-service-up-exec-pod-vrzxq" in namespace "services-2528" to be "running"
Jan 20 17:15:16.880: INFO: Pod "verify-service-up-exec-pod-vrzxq": Phase="Pending", Reason="", readiness=false. Elapsed: 31.143341ms
Jan 20 17:15:18.913: INFO: Pod "verify-service-up-exec-pod-vrzxq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064450079s
Jan 20 17:15:20.912: INFO: Pod "verify-service-up-exec-pod-vrzxq": Phase="Running", Reason="", readiness=true. Elapsed: 4.062946261s
Jan 20 17:15:20.912: INFO: Pod "verify-service-up-exec-pod-vrzxq" satisfied condition "running"
STEP: verifying service has 3 reachable backends 01/20/23 17:15:20.912
Jan 20 17:15:20.912: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -O - -T 1 http://100.67.240.180:80 2>&1 || true; echo; done" in pod services-2528/verify-service-up-host-exec-pod
Jan 20 17:15:20.912: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2528 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -O - -T 1 http://100.67.240.180:80 2>&1 || true; echo; done'
Jan 20 17:15:21.682: INFO: stderr: "+ seq 1 150\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n[... the '+ wget'/'+ echo' trace pair repeats identically for all 150 iterations ...]"
Jan 20 17:15:21.683: INFO: stdout:
"service-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-to
ggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\n" Jan 20 17:15:21.683: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -O - -T 1 http://100.67.240.180:80 2>&1 || true; echo; done" in pod services-2528/verify-service-up-exec-pod-vrzxq Jan 20 17:15:21.683: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2528 exec verify-service-up-exec-pod-vrzxq -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -O - -T 1 http://100.67.240.180:80 2>&1 || true; echo; done' Jan 20 17:15:22.787: INFO: stderr: "+ seq 1 150\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ 
echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 
http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n+ wget -q -O - -T 1 http://100.67.240.180:80\n+ echo\n" Jan 20 17:15:22.787: INFO: stdout: 
"service-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-to
ggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\nservice-proxy-toggled-bvmzm\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-8j48l\n" �[1mSTEP:�[0m Deleting pod verify-service-up-host-exec-pod in namespace services-2528 �[38;5;243m01/20/23 17:15:22.788�[0m �[1mSTEP:�[0m Deleting pod verify-service-up-exec-pod-vrzxq in namespace services-2528 �[38;5;243m01/20/23 17:15:22.828�[0m �[1mSTEP:�[0m verifying service-disabled is not up �[38;5;243m01/20/23 17:15:22.869�[0m Jan 20 17:15:22.869: INFO: Creating new host exec pod Jan 20 17:15:22.905: INFO: Waiting up to 5m0s for pod "verify-service-down-host-exec-pod" in namespace "services-2528" to be "running and ready" Jan 20 17:15:22.937: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 31.950018ms Jan 20 17:15:22.937: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 20 17:15:24.968: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062948208s Jan 20 17:15:24.968: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 20 17:15:26.976: INFO: Pod "verify-service-down-host-exec-pod": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.071314966s Jan 20 17:15:26.976: INFO: The phase of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Jan 20 17:15:48.561: INFO: Encountered non-retryable error while getting pod services-2528/verify-service-down-host-exec-pod: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/pods/verify-service-down-host-exec-pod": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=551, ErrCode=NO_ERROR, debug="" Jan 20 17:15:48.561: INFO: Unexpected error: <*fmt.wrapError | 0xc004a1bfa0>: { msg: "error while waiting for pod services-2528/verify-service-down-host-exec-pod to be running and ready: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/pods/verify-service-down-host-exec-pod\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=551, ErrCode=NO_ERROR, debug=\"\"", err: <*rest.wrapPreviousError | 0xc004a1bf80>{ currentErr: <*url.Error | 0xc00471c870>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/pods/verify-service-down-host-exec-pod", Err: <*net.OpError | 0xc004749f90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002b452f0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004a1bf40>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 551, ErrCode: 0, DebugData: ""}, }, } Jan 20 17:15:48.561: FAIL: error while waiting for pod services-2528/verify-service-down-host-exec-pod to be running and ready: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/pods/verify-service-down-host-exec-pod": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=551, ErrCode=NO_ERROR, debug="" Full Stack Trace k8s.io/kubernetes/test/e2e/network.launchHostExecPod({0x8022ee8, 0xc00374b040}, {0xc004360e10, 0xd}, {0x76695d3, 0x21}) test/e2e/network/service.go:4058 +0x1bd k8s.io/kubernetes/test/e2e/network.verifyServeHostnameServiceDown({0x8022ee8, 0xc00374b040}, {0xc004360e10, 0xd}, {0xc004421ae0, 0xd}, 0x3?) test/e2e/network/service.go:403 +0x8d k8s.io/kubernetes/test/e2e/network.glob..func26.31() test/e2e/network/service.go:2283 +0x499 [AfterEach] [sig-network] Services test/e2e/framework/node/init/init.go:32 Jan 20 17:15:48.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-network] Services test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] Services dump namespaces | framework.go:196 �[1mSTEP:�[0m dump namespace information after failure �[38;5;243m01/20/23 17:15:48.601�[0m �[1mSTEP:�[0m Collecting events from namespace "services-2528". 
�[38;5;243m01/20/23 17:15:48.601�[0m Jan 20 17:15:48.645: INFO: Unexpected error: failed to list events in namespace "services-2528": <*url.Error | 0xc002e1ec00>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/events", Err: <*net.OpError | 0xc002e22a00>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00471da70>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000c7ef00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 20 17:15:48.645: FAIL: failed to list events in namespace "services-2528": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528/events": dial tcp 100.26.139.144:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc002cd45c0, {0xc004360e10, 0xd}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc00374b040}, {0xc004360e10, 0xd}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc002cd4650?, {0xc004360e10?, 0x7fac780?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000ebc5a0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x662c060?, 0xc000116d60?, 0xc000c75fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0001db268?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc000116d60?, 0x2946afc?}, {0xae7b420?, 0xc000c75f80?, 0xc002cdda90?}) /usr/local/go/src/reflect/value.go:368 +0xbc [DeferCleanup (Each)] [sig-network] Services tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "services-2528" for this suite. �[38;5;243m01/20/23 17:15:48.645�[0m Jan 20 17:15:48.686: FAIL: Couldn't delete ns: "services-2528": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/services-2528", Err:(*net.OpError)(0xc002e22eb0)}) Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ebc5a0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x662c060?, 0xc000116c60?, 0xc002ef1360?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc000116c60?, 0x0?}, {0xae7b420?, 0x5?, 0xc002ef1360?}) /usr/local/go/src/reflect/value.go:368 +0xbc
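The wall of pod names captured above is the mechanism behind the "verifying service up" step: the exec pod issues many requests against the service and the check passes only if every backend pod's hostname shows up in the aggregated output. A minimal sketch of that set check, not the e2e framework's actual helper (function name and sample data here are illustrative):

package main

import (
	"fmt"
	"strings"
)

// expectedHostnamesSeen reports whether every expected backend pod name
// appears at least once in the newline-separated hostnames captured from
// repeated requests against the service.
func expectedHostnamesSeen(output string, expected []string) bool {
	seen := make(map[string]bool)
	for _, h := range strings.Fields(output) {
		seen[h] = true
	}
	for _, want := range expected {
		if !seen[want] {
			return false
		}
	}
	return true
}

func main() {
	out := "service-proxy-toggled-8j48l\nservice-proxy-toggled-zghmz\nservice-proxy-toggled-bvmzm\n"
	fmt.Println(expectedHostnamesSeen(out, []string{
		"service-proxy-toggled-8j48l",
		"service-proxy-toggled-zghmz",
		"service-proxy-toggled-bvmzm",
	}))
}

Because the check only needs each hostname to appear at least once, duplicate responses (as in the dump above) are harmless; a missing hostname is what fails the step.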
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sPods\sshould\sdelete\sa\scollection\sof\spods\s\[Conformance\]$'
test/e2e/common/node/pods.go:876 k8s.io/kubernetes/test/e2e/common/node.glob..func15.12() test/e2e/common/node/pods.go:876 +0x645
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.645: failed to list events in namespace "pods-3771": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.687: Couldn't delete ns: "pods-3771": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771", Err:(*net.OpError)(0xc0027b0910)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-node] Pods set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:25.554
Jan 20 17:15:25.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods 01/20/23 17:15:25.555
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:25.66
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:25.718
[BeforeEach] [sig-node] Pods test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-node] Pods test/e2e/common/node/pods.go:194
[It] should delete a collection of pods [Conformance] test/e2e/common/node/pods.go:845
STEP: Create set of pods 01/20/23 17:15:25.778
Jan 20 17:15:25.818: INFO: created test-pod-1
Jan 20 17:15:25.856: INFO: created test-pod-2
Jan 20 17:15:25.891: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running 01/20/23 17:15:25.891
Jan 20 17:15:25.891: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-3771' to be running and ready
Jan 20 17:15:25.995: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 20 17:15:25.995: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 20 17:15:25.995: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 20 17:15:25.995: INFO: 0 / 3 pods in namespace 'pods-3771' are running and ready (0 seconds elapsed)
Jan 20 17:15:25.995: INFO: expected 0 pod replicas in namespace 'pods-3771', 0 are Running and Ready.
Jan 20 17:15:25.995: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 20 17:15:25.995: INFO: test-pod-1 i-0460dbd3e490039bb Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC }]
Jan 20 17:15:25.995: INFO: test-pod-2 i-0f775d321e19704c3 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC }]
Jan 20 17:15:25.995: INFO: test-pod-3 i-0f775d321e19704c3 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:15:25 +0000 UTC }]
Jan 20 17:15:25.995: INFO:
Jan 20 17:15:48.560: INFO: Encountered non-retryable error while listing replication controllers in namespace pods-3771: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/replicationcontrollers": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=219, ErrCode=NO_ERROR, debug=""
Jan 20 17:15:48.561: INFO: Unexpected error: 3 pods not found running.: <*fmt.wrapError | 0xc0004c86e0>: { msg: "3 / 3 pods in namespace pods-3771 are NOT in RUNNING and READY state in 5m0s\nPOD NODE PHASE GRACE CONDITIONS\ntest-pod-1 i-0460dbd3e490039bb Pending [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:}]\ntest-pod-2 i-0f775d321e19704c3 Pending [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:}]\ntest-pod-3 i-0f775d321e19704c3 Pending [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:}]\n\nLast error: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/replicationcontrollers\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=219, ErrCode=NO_ERROR, debug=\"\"", err: <*rest.wrapPreviousError | 0xc0004c86c0>{ currentErr: <*url.Error | 0xc004ed6cc0>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/replicationcontrollers", Err: <*net.OpError | 0xc002a3d720>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005031170>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0004c8620>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 219, ErrCode: 0, DebugData: ""}, }, }
Jan 20 17:15:48.561: FAIL: 3 pods not found running.: 3 / 3 pods in namespace pods-3771 are NOT in RUNNING and READY state in 5m0s
POD NODE PHASE GRACE CONDITIONS
test-pod-1 i-0460dbd3e490039bb Pending [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:}]
test-pod-2 i-0f775d321e19704c3 Pending [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:}]
test-pod-3 i-0f775d321e19704c3 Pending [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [token-test]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-20 17:15:25 +0000 UTC Reason: Message:}]
Last error: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/replicationcontrollers": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=219, ErrCode=NO_ERROR, debug=""
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.glob..func15.12() test/e2e/common/node/pods.go:876 +0x645
[AfterEach] [sig-node] Pods test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Pods test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Pods dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.607
STEP: Collecting events from namespace "pods-3771". 01/20/23 17:15:48.608
Jan 20 17:15:48.645: INFO: Unexpected error: failed to list events in namespace "pods-3771": <*url.Error | 0xc004f77410>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/events", Err: <*net.OpError | 0xc002bc0b90>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004ee4690>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0016907c0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.645: FAIL: failed to list events in namespace "pods-3771": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc004bde5c0, {0xc000ea12c0, 0x9}) test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc004f16b60}, {0xc000ea12c0, 0x9}) test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004bde650?, {0xc000ea12c0?, 0x7fac780?}) test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002f5ef0) test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc000a3f9a0?, 0xc0046adfb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0032ca568?}) /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc000a3f9a0?, 0x2946afc?}, {0xae7b420?, 0xc0046adf80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-node] Pods tear down framework | framework.go:193
STEP: Destroying namespace "pods-3771" for this suite. 01/20/23 17:15:48.646
Jan 20 17:15:48.687: FAIL: Couldn't delete ns: "pods-3771": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/pods-3771", Err:(*net.OpError)(0xc0027b0910)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0002f5ef0) test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc000a3f8c0?, 0x0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc000a3f8c0?, 0x0?}, {0xae7b420?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
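This test never reached the behavior it conforms: deleting all of its pods in a single DeleteCollection call, because the API server became unreachable while the pods were still Pending. For reference, a minimal client-go sketch of the API surface the test exercises (the kubeconfig path mirrors the log's ">>> kubeConfig" line; the label selector is an assumption, not taken from the test source):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, matching the log output above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Delete every pod in the namespace matching the selector in one call,
	// which is what "should delete a collection of pods" verifies.
	err = client.CoreV1().Pods("pods-3771").DeleteCollection(
		context.Background(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"}, // hypothetical label
	)
	fmt.Println("DeleteCollection returned:", err)
}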
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\sby\sliveness\sprobe\sbecause\sstartup\sprobe\sdelays\sit$'
test/e2e/common/node/container_probe.go:972 k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc0002611d0, 0xc0006b5b00, 0x0, 0xc00495fcb0?) test/e2e/common/node/container_probe.go:972 +0x96b k8s.io/kubernetes/test/e2e/common/node.glob..func2.16() test/e2e/common/node/container_probe.go:371 +0x1be
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.651: failed to list events in namespace "container-probe-7743": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.687: Couldn't delete ns: "container-probe-7743": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743", Err:(*net.OpError)(0xc004a97040)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-node] Probing container set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:57.179
Jan 20 17:14:57.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe 01/20/23 17:14:57.18
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:57.277
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:57.339
[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:63
[It] should *not* be restarted by liveness probe because startup probe delays it test/e2e/common/node/container_probe.go:350
STEP: Creating pod startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 in namespace container-probe-7743 01/20/23 17:14:57.403
Jan 20 17:14:57.439: INFO: Waiting up to 5m0s for pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1" in namespace "container-probe-7743" to be "not pending"
Jan 20 17:14:57.479: INFO: Pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.520308ms
Jan 20 17:14:59.512: INFO: Pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072914049s
Jan 20 17:15:01.525: INFO: Pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086248462s
Jan 20 17:15:03.511: INFO: Pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072890212s
Jan 20 17:15:05.541: INFO: Pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1": Phase="Running", Reason="", readiness=false. Elapsed: 8.102255386s
Jan 20 17:15:05.541: INFO: Pod "startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1" satisfied condition "not pending"
Jan 20 17:15:05.541: INFO: Started pod startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 in namespace container-probe-7743
STEP: checking the pod's current state and verifying that restartCount is present 01/20/23 17:15:05.541
Jan 20 17:15:05.583: INFO: Initial restart count of pod startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 is 0
Jan 20 17:15:48.564: INFO: Unexpected error: getting pod : <*rest.wrapPreviousError | 0xc00221e220>: { currentErr: <*url.Error | 0xc002276d50>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743/pods/startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1", Err: <*net.OpError | 0xc004a96b40>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004aa10e0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00221e1e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 235, ErrCode: 0, DebugData: ""}, }
Jan 20 17:15:48.565: FAIL: getting pod : Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743/pods/startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=235, ErrCode=NO_ERROR, debug=""
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc0002611d0, 0xc0006b5b00, 0x0, 0xc00495fcb0?) test/e2e/common/node/container_probe.go:972 +0x96b
k8s.io/kubernetes/test/e2e/common/node.glob..func2.16() test/e2e/common/node/container_probe.go:371 +0x1be
STEP: deleting the pod 01/20/23 17:15:48.565
[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.608
STEP: Collecting events from namespace "container-probe-7743". 01/20/23 17:15:48.608
Jan 20 17:15:48.650: INFO: Unexpected error: failed to list events in namespace "container-probe-7743": <*url.Error | 0xc003960210>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743/events", Err: <*net.OpError | 0xc0039580a0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004aa1ef0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0033aa000>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.651: FAIL: failed to list events in namespace "container-probe-7743": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0005ba5c0, {0xc0049b2378, 0x14}) test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc000c97a00}, {0xc0049b2378, 0x14}) test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0005ba650?, {0xc0049b2378?, 0x7fac780?}) test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0002611d0) test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc0013b49f0?, 0xc002e3efb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0037d6a48?}) /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0013b49f0?, 0x2946afc?}, {0xae7b420?, 0xc002e3ef80?, 0x1?}) /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193
STEP: Destroying namespace "container-probe-7743" for this suite. 01/20/23 17:15:48.651
Jan 20 17:15:48.687: FAIL: Couldn't delete ns: "container-probe-7743": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-7743", Err:(*net.OpError)(0xc004a97040)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0002611d0) test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc0013b4920?, 0xc00327a820?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0013b4920?, 0x0?}, {0xae7b420?, 0x5?, 0xc00327a820?}) /usr/local/go/src/reflect/value.go:368 +0xbc
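The kubelet behavior this test asserts: a container's liveness probe is not run until its startup probe has succeeded, so a slow-starting container is not restarted in the meantime. A hedged sketch of a container spec that relies on that ordering (image, paths, and thresholds are illustrative, not the test's exact fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "startup-delayed",
		Image: "registry.k8s.io/e2e-test-images/busybox:1.29-4", // illustrative image
		// An aggressive liveness probe that would restart the container
		// after a single failed check...
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
			},
			PeriodSeconds:    10,
			FailureThreshold: 1,
		},
		// ...but the kubelet holds it off until the startup probe succeeds,
		// giving the container up to 30*10s to come up without a restart.
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
			},
			PeriodSeconds:    10,
			FailureThreshold: 30,
		},
	}
	fmt.Printf("%+v\n", c.StartupProbe)
}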
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sGRPC\sliveness\sprobe\s\[NodeConformance\]$'
test/e2e/common/node/container_probe.go:955 k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000b662d0, 0xc000b0a000, 0x0, 0x4?) test/e2e/common/node/container_probe.go:955 +0x39a k8s.io/kubernetes/test/e2e/common/node.glob..func2.21() test/e2e/common/node/container_probe.go:538 +0xbe
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.691: failed to list events in namespace "container-probe-6026": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.734: Couldn't delete ns: "container-probe-6026": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026", Err:(*net.OpError)(0xc00483e550)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
[BeforeEach] [sig-node] Probing container set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:20.01
Jan 20 17:15:20.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe 01/20/23 17:15:20.011
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:20.1
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:20.157
[BeforeEach] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-node] Probing container test/e2e/common/node/container_probe.go:63
[It] should *not* be restarted with a GRPC liveness probe [NodeConformance] test/e2e/common/node/container_probe.go:524
STEP: Creating pod test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea in namespace container-probe-6026 01/20/23 17:15:20.216
Jan 20 17:15:20.254: INFO: Waiting up to 5m0s for pod "test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea" in namespace "container-probe-6026" to be "not pending"
Jan 20 17:15:20.283: INFO: Pod "test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea": Phase="Pending", Reason="", readiness=false. Elapsed: 29.929436ms
Jan 20 17:15:22.324: INFO: Pod "test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070826179s
Jan 20 17:15:24.314: INFO: Pod "test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060175044s
Jan 20 17:15:26.314: INFO: Pod "test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060846708s
Jan 20 17:15:48.562: INFO: Encountered non-retryable error while getting pod container-probe-6026/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/pods/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=315, ErrCode=NO_ERROR, debug=""
Jan 20 17:15:48.563: INFO: Unexpected error: starting pod test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea in namespace container-probe-6026: <*fmt.wrapError | 0xc003d344a0>: { msg: "error while waiting for pod container-probe-6026/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea to be not pending: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/pods/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=315, ErrCode=NO_ERROR, debug=\"\"", err: <*rest.wrapPreviousError | 0xc003d34480>{ currentErr: <*url.Error | 0xc004bd7590>{ Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/pods/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea", Err: <*net.OpError | 0xc0049a8780>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004b19290>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003d34440>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }, previousError: <http2.GoAwayError>{LastStreamID: 315, ErrCode: 0, DebugData: ""}, }, }
Jan 20 17:15:48.563: FAIL: starting pod test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea in namespace container-probe-6026: error while waiting for pod container-probe-6026/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea to be not pending: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/pods/test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=315, ErrCode=NO_ERROR, debug=""
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000b662d0, 0xc000b0a000, 0x0, 0x4?) test/e2e/common/node/container_probe.go:955 +0x39a
k8s.io/kubernetes/test/e2e/common/node.glob..func2.21() test/e2e/common/node/container_probe.go:538 +0xbe
STEP: deleting the pod 01/20/23 17:15:48.563
[AfterEach] [sig-node] Probing container test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-node] Probing container test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-node] Probing container dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.65
STEP: Collecting events from namespace "container-probe-6026". 01/20/23 17:15:48.65
Jan 20 17:15:48.691: INFO: Unexpected error: failed to list events in namespace "container-probe-6026": <*url.Error | 0xc004878f60>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/events", Err: <*net.OpError | 0xc0049a9630>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002614360>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc003d349a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.691: FAIL: failed to list events in namespace "container-probe-6026": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc004a585c0, {0xc00399bf50, 0x14}) test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc004bd2000}, {0xc00399bf50, 0x14}) test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004a58650?, {0xc00399bf50?, 0x7fac780?}) test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000b662d0) test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc001185ff0?, 0xc00065ffb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc003ca4be8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc001185ff0?, 0x2946afc?}, {0xae7b420?, 0xc00065ff80?, 0xc00065ff70?}) /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-node] Probing container tear down framework | framework.go:193
STEP: Destroying namespace "container-probe-6026" for this suite. 01/20/23 17:15:48.692
Jan 20 17:15:48.734: FAIL: Couldn't delete ns: "container-probe-6026": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-6026", Err:(*net.OpError)(0xc00483e550)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000b662d0) test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc001185f50?, 0x0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0001ef6f0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc001185f50?, 0x7fe5ba8?}, {0xae7b420?, 0x100000000000000?, 0xc004b2a000?}) /usr/local/go/src/reflect/value.go:368 +0xbc
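For reference, the probe type this test exercises: a liveness probe can target a container's gRPC health-checking endpoint directly, with the kubelet calling the standard grpc.health.v1 Health service on the given port. A minimal sketch of such a probe (port and timings are illustrative, not the test's fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// gRPC liveness probe: the kubelet issues Health/Check RPCs against the
	// container's port; a SERVING response counts as a successful probe.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			GRPC: &corev1.GRPCAction{Port: 5000},
		},
		InitialDelaySeconds: 10,
		PeriodSeconds:       10,
	}
	fmt.Printf("%+v\n", probe)
}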
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolumeMode\sshould\snot\smount\s\/\smap\sunused\svolumes\sin\sa\spod\s\[LinuxOnly\]$'
test/e2e/storage/testsuites/volumemode.go:383 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7.1() test/e2e/storage/testsuites/volumemode.go:383 +0x45 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9e8
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:13.493
Jan 20 17:15:13.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volumemode 01/20/23 17:15:13.494
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:13.582
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:13.638
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/framework/metrics/init/init.go:31
[It] should not mount / map unused volumes in a pod [LinuxOnly] test/e2e/storage/testsuites/volumemode.go:354
STEP: Building a driver namespace object, basename volumemode-4314 01/20/23 17:15:13.697
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:13.783
STEP: deploying csi-hostpath driver 01/20/23 17:15:13.839
Jan 20 17:15:13.969: INFO: creating *v1.ServiceAccount: volumemode-4314-8917/csi-attacher
Jan 20 17:15:14.008: INFO: creating *v1.ClusterRole: external-attacher-runner-volumemode-4314
Jan 20 17:15:14.008: INFO: Define cluster role external-attacher-runner-volumemode-4314
Jan 20 17:15:14.045: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-volumemode-4314
Jan 20 17:15:14.089: INFO: creating *v1.Role: volumemode-4314-8917/external-attacher-cfg-volumemode-4314
Jan 20 17:15:14.127: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-attacher-role-cfg
Jan 20 17:15:14.158: INFO: creating *v1.ServiceAccount: volumemode-4314-8917/csi-provisioner
Jan 20 17:15:14.207: INFO: creating *v1.ClusterRole: external-provisioner-runner-volumemode-4314
Jan 20 17:15:14.207: INFO: Define cluster role external-provisioner-runner-volumemode-4314
Jan 20 17:15:14.241: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-volumemode-4314
Jan 20 17:15:14.272: INFO: creating *v1.Role: volumemode-4314-8917/external-provisioner-cfg-volumemode-4314
Jan 20 17:15:14.320: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-provisioner-role-cfg
Jan 20 17:15:14.352: INFO: creating *v1.ServiceAccount: volumemode-4314-8917/csi-snapshotter
Jan 20 17:15:14.382: INFO: creating *v1.ClusterRole: external-snapshotter-runner-volumemode-4314
Jan 20 17:15:14.382: INFO: Define cluster role external-snapshotter-runner-volumemode-4314
Jan 20 17:15:14.418: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-volumemode-4314
Jan 20 17:15:14.452: INFO: creating *v1.Role: volumemode-4314-8917/external-snapshotter-leaderelection-volumemode-4314
Jan 20 17:15:14.483: INFO: creating *v1.RoleBinding: volumemode-4314-8917/external-snapshotter-leaderelection
Jan 20 17:15:14.514: INFO: creating *v1.ServiceAccount: volumemode-4314-8917/csi-external-health-monitor-controller
Jan 20 17:15:14.545: INFO: creating *v1.ClusterRole: external-health-monitor-controller-runner-volumemode-4314
Jan 20 17:15:14.545: INFO: Define cluster role external-health-monitor-controller-runner-volumemode-4314
Jan 20 17:15:14.583: INFO: creating *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-volumemode-4314
Jan 20 17:15:14.616: INFO: creating *v1.Role: volumemode-4314-8917/external-health-monitor-controller-cfg-volumemode-4314
Jan 20 17:15:14.648: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-external-health-monitor-controller-role-cfg Jan 20 17:15:14.679: INFO: creating *v1.ServiceAccount: volumemode-4314-8917/csi-resizer Jan 20 17:15:14.725: INFO: creating *v1.ClusterRole: external-resizer-runner-volumemode-4314 Jan 20 17:15:14.725: INFO: Define cluster role external-resizer-runner-volumemode-4314 Jan 20 17:15:14.755: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-volumemode-4314 Jan 20 17:15:14.785: INFO: creating *v1.Role: volumemode-4314-8917/external-resizer-cfg-volumemode-4314 Jan 20 17:15:14.820: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-resizer-role-cfg Jan 20 17:15:14.854: INFO: creating *v1.CSIDriver: csi-hostpath-volumemode-4314 Jan 20 17:15:14.890: INFO: creating *v1.ServiceAccount: volumemode-4314-8917/csi-hostpathplugin-sa Jan 20 17:15:14.930: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-volumemode-4314 Jan 20 17:15:14.962: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-volumemode-4314 Jan 20 17:15:15.002: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-volumemode-4314 Jan 20 17:15:15.032: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-volumemode-4314 Jan 20 17:15:15.066: INFO: creating *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-volumemode-4314 Jan 20 17:15:15.097: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-attacher-role Jan 20 17:15:15.127: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-health-monitor-controller-role Jan 20 17:15:15.157: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-provisioner-role Jan 20 17:15:15.190: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-resizer-role Jan 20 17:15:15.222: INFO: creating *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-snapshotter-role Jan 20 17:15:15.251: INFO: creating *v1.StatefulSet: volumemode-4314-8917/csi-hostpathplugin Jan 20 17:15:15.291: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumemode-4314 Jan 20 17:15:15.321: INFO: Creating resource for dynamic PV Jan 20 17:15:15.321: INFO: Using claimSize:1Mi, test suite supported size:{ 1Mi}, driver(csi-hostpath) supported size:{ 1Mi} �[1mSTEP:�[0m creating a StorageClass volumemode-4314c2lhj �[38;5;243m01/20/23 17:15:15.321�[0m �[1mSTEP:�[0m creating a claim �[38;5;243m01/20/23 17:15:15.357�[0m Jan 20 17:15:15.388: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathrhtnl] to have phase Bound Jan 20 17:15:15.427: INFO: PersistentVolumeClaim csi-hostpathrhtnl found but phase is Pending instead of Bound. Jan 20 17:15:17.457: INFO: PersistentVolumeClaim csi-hostpathrhtnl found but phase is Pending instead of Bound. Jan 20 17:15:19.486: INFO: PersistentVolumeClaim csi-hostpathrhtnl found and phase=Bound (4.097853336s) �[1mSTEP:�[0m Creating pod �[38;5;243m01/20/23 17:15:19.543�[0m Jan 20 17:15:19.574: INFO: Waiting up to 5m0s for pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b" in namespace "volumemode-4314" to be "running" Jan 20 17:15:19.603: INFO: Pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.592607ms Jan 20 17:15:21.632: INFO: Pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.058394309s Jan 20 17:15:23.633: INFO: Pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b": Phase="Running", Reason="", readiness=true. Elapsed: 4.059221896s Jan 20 17:15:23.633: INFO: Pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b" satisfied condition "running" �[1mSTEP:�[0m Listing mounted volumes in the pod �[38;5;243m01/20/23 17:15:23.694�[0m Jan 20 17:15:23.726: INFO: Waiting up to 5m0s for pod "hostexec-i-048afc59cd0c5fa4a-lnrb6" in namespace "volumemode-4314" to be "running" Jan 20 17:15:23.754: INFO: Pod "hostexec-i-048afc59cd0c5fa4a-lnrb6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.477023ms Jan 20 17:15:25.784: INFO: Pod "hostexec-i-048afc59cd0c5fa4a-lnrb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.058039259s Jan 20 17:15:25.784: INFO: Pod "hostexec-i-048afc59cd0c5fa4a-lnrb6" satisfied condition "running" Jan 20 17:15:25.784: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -d /var/lib/kubelet/pods/602dc92d-6d17-46cf-82ef-64c9ed287c49/volumes] Namespace:volumemode-4314 PodName:hostexec-i-048afc59cd0c5fa4a-lnrb6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:25.784: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:25.785: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:25.785: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/pods/hostexec-i-048afc59cd0c5fa4a-lnrb6/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=test+%21+-d+%2Fvar%2Flib%2Fkubelet%2Fpods%2F602dc92d-6d17-46cf-82ef-64c9ed287c49%2Fvolumes&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:15:26.035: INFO: exec i-048afc59cd0c5fa4a: command: test ! -d /var/lib/kubelet/pods/602dc92d-6d17-46cf-82ef-64c9ed287c49/volumes Jan 20 17:15:26.035: INFO: exec i-048afc59cd0c5fa4a: stdout: "" Jan 20 17:15:26.036: INFO: exec i-048afc59cd0c5fa4a: stderr: "" Jan 20 17:15:26.036: INFO: exec i-048afc59cd0c5fa4a: exit code: 0 Jan 20 17:15:26.036: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/pods/602dc92d-6d17-46cf-82ef-64c9ed287c49/volumes -mindepth 2 -maxdepth 2] Namespace:volumemode-4314 PodName:hostexec-i-048afc59cd0c5fa4a-lnrb6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:26.036: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:26.037: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:26.037: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/pods/hostexec-i-048afc59cd0c5fa4a-lnrb6/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=find+%2Fvar%2Flib%2Fkubelet%2Fpods%2F602dc92d-6d17-46cf-82ef-64c9ed287c49%2Fvolumes+-mindepth+2+-maxdepth+2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:15:26.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-d /var/lib/kubelet/pods/602dc92d-6d17-46cf-82ef-64c9ed287c49/volumeDevices] Namespace:volumemode-4314 PodName:hostexec-i-048afc59cd0c5fa4a-lnrb6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:26.320: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:26.321: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:26.321: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/pods/hostexec-i-048afc59cd0c5fa4a-lnrb6/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=test+%21+-d+%2Fvar%2Flib%2Fkubelet%2Fpods%2F602dc92d-6d17-46cf-82ef-64c9ed287c49%2FvolumeDevices&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) �[1mSTEP:�[0m Checking that volume plugin kubernetes.io/csi is not used in pod directory �[38;5;243m01/20/23 17:15:26.619�[0m �[1mSTEP:�[0m Deleting pod hostexec-i-048afc59cd0c5fa4a-lnrb6 in namespace volumemode-4314 �[38;5;243m01/20/23 17:15:26.619�[0m Jan 20 17:15:26.664: INFO: Deleting pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b" in namespace "volumemode-4314" Jan 20 17:15:26.696: INFO: Wait up to 5m0s for pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b" to be fully deleted ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.571: INFO: Encountered non-retryable error while getting pod volumemode-4314/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/pods/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=585, ErrCode=NO_ERROR, debug="" Jan 20 17:15:48.571: INFO: Unexpected error: <*errors.errorString | 0xc00146c3a0>: { s: "pod \"pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b\" was not deleted: error while waiting for pod volumemode-4314/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b not found: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/pods/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=585, ErrCode=NO_ERROR, 
debug=\"\"", } Jan 20 17:15:48.571: FAIL: pod "pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b" was not deleted: error while waiting for pod volumemode-4314/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b not found: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/pods/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=585, ErrCode=NO_ERROR, debug="" Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7.1() test/e2e/storage/testsuites/volumemode.go:383 +0x45 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9e8 ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused �[1mSTEP:�[0m Deleting pvc �[38;5;243m01/20/23 17:15:48.572�[0m Jan 20 17:15:48.572: INFO: Deleting PersistentVolumeClaim "csi-hostpathrhtnl" Jan 20 17:15:48.613: INFO: Waiting up to 5m0s for PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd to get deleted ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.661: INFO: Get persistent volume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd in failed, ignoring for 5s: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/persistentvolumes/pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in volumemode-4314-8917: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314-8917/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:16:09.172: INFO: Get persistent volume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd in failed, ignoring for 5s: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/persistentvolumes/pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:16:15.420: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (26.806937068s) Jan 20 17:16:20.468: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (31.855014515s) Jan 20 17:16:25.562: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (36.948832344s) Jan 20 17:16:30.660: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (42.047137184s) Jan 20 17:16:35.761: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (47.148415388s) Jan 20 17:16:40.859: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (52.246596421s) Jan 20 17:16:45.910: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (57.297262415s) Jan 20 17:16:50.959: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (1m2.346600072s) Jan 20 17:16:56.060: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (1m7.446966725s) Jan 20 17:17:01.113: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (1m12.49997136s) Jan 20 17:17:06.209: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (1m17.596460829s) Jan 20 17:17:11.319: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (1m22.706655979s) Jan 20 17:17:16.414: INFO: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd found and phase=Bound (1m27.80119747s) Jan 20 17:17:21.513: INFO: PersistentVolume 
STEP: Deleting sc 01/20/23 17:20:49.611
Jan 20 17:20:49.710: INFO: Unexpected error: while cleaning up resource:
    <errors.aggregate | len:1, cap:1>: [
        <errors.aggregate | len:2, cap:2>[
            <*fmt.wrapError | 0xc004955b00>{
                msg: "failed to delete PVC csi-hostpathrhtnl: PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/persistentvolumeclaims/csi-hostpathrhtnl\": dial tcp 100.26.139.144:443: connect: connection refused",
                err: <*errors.errorString | 0xc00146c9d0>{
                    s: "PVC Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/persistentvolumeclaims/csi-hostpathrhtnl\": dial tcp 100.26.139.144:443: connect: connection refused",
                },
            },
            <*fmt.wrapError | 0xc003212a60>{
                msg: "persistent Volume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd not deleted by dynamic provisioner: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd still exists within 5m0s",
                err: <*errors.errorString | 0xc001326820>{
                    s: "PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd still exists within 5m0s",
                },
            },
        ],
    ]
Jan 20 17:20:49.710: FAIL: while cleaning up resource: [failed to delete PVC csi-hostpathrhtnl: PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-4314/persistentvolumeclaims/csi-hostpathrhtnl": dial tcp 100.26.139.144:443: connect: connection refused, persistent Volume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd not deleted by dynamic provisioner: PersistentVolume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd still exists within 5m0s]

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func3()
	test/e2e/storage/testsuites/volumemode.go:190 +0x1ee
panic({0x70efe60, 0xc004aad490})
	/usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc004ad8600, 0x1fe}, {0xc004455b18?, 0xc004ad8600?, 0xc004455b40?})
	test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc00146c3a0}, {0x0?, 0x6206c66?, 0xc0013be570?})
	test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7.1()
	test/e2e/storage/testsuites/volumemode.go:383 +0x45
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7()
	test/e2e/storage/testsuites/volumemode.go:416 +0x9e8
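The nested <errors.aggregate> dump above is the apimachinery error aggregate produced when several cleanup steps fail: each teardown step appends its wrapped error, and the aggregate flattens them into the single FAIL message. A small illustrative sketch of that shape in Go (assumed structure, not the suite's exact code):

package main

import (
	"errors"
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

func main() {
	var errs []error
	// First teardown step: delete the PVC (fails while the API server is down).
	errs = append(errs, fmt.Errorf("failed to delete PVC csi-hostpathrhtnl: %w",
		errors.New("PVC Delete API error: connection refused")))
	// Second step: wait for the provisioner to remove the PV (times out).
	errs = append(errs, fmt.Errorf("persistent volume not deleted by dynamic provisioner: %w",
		errors.New("PersistentVolume still exists within 5m0s")))

	// NewAggregate returns nil for an empty slice; otherwise it yields one
	// error whose message joins all underlying errors, like the FAIL line above.
	if agg := utilerrors.NewAggregate(errs); agg != nil {
		fmt.Println("while cleaning up resource:", agg)
	}
}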
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/framework/node/init/init.go:32
Jan 20 17:20:49.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/drivers/csi.go:289
STEP: deleting the test namespace: volumemode-4314 01/20/23 17:20:49.811
STEP: Collecting events from namespace "volumemode-4314". 01/20/23 17:20:49.811
STEP: Found 15 events. 01/20/23 17:20:49.91
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:15 +0000 UTC - event for csi-hostpathrhtnl: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-volumemode-4314" or manually created by system administrator
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathrhtnl: {csi-hostpath-volumemode-4314_csi-hostpathplugin-0_59df643c-32f4-401a-9f3c-77486371b2ad } Provisioning: External provisioner is provisioning volume for claim "volumemode-4314/csi-hostpathrhtnl"
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathrhtnl: {csi-hostpath-volumemode-4314_csi-hostpathplugin-0_59df643c-32f4-401a-9f3c-77486371b2ad } ProvisioningFailed: failed to provision volume with StorageClass "volumemode-4314c2lhj": error generating accessibility requirements: no available topology found
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:19 +0000 UTC - event for csi-hostpathrhtnl: {csi-hostpath-volumemode-4314_csi-hostpathplugin-0_59df643c-32f4-401a-9f3c-77486371b2ad } ProvisioningSucceeded: Successfully provisioned volume pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:19 +0000 UTC - event for pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: {default-scheduler } Scheduled: Successfully assigned volumemode-4314/pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b to i-048afc59cd0c5fa4a
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:20 +0000 UTC - event for pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-8a0a7d40-0d2b-4d3c-96bb-d146635832dd"
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:20 +0000 UTC - event for pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:20 +0000 UTC - event for pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: {kubelet i-048afc59cd0c5fa4a} Created: Created container write-pod
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:21 +0000 UTC - event for pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: {kubelet i-048afc59cd0c5fa4a} Started: Started container write-pod
Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:23 +0000 UTC - event for
hostexec-i-048afc59cd0c5fa4a-lnrb6: {default-scheduler } Scheduled: Successfully assigned volumemode-4314/hostexec-i-048afc59cd0c5fa4a-lnrb6 to i-048afc59cd0c5fa4a Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:24 +0000 UTC - event for hostexec-i-048afc59cd0c5fa4a-lnrb6: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:24 +0000 UTC - event for hostexec-i-048afc59cd0c5fa4a-lnrb6: {kubelet i-048afc59cd0c5fa4a} Created: Created container agnhost-container Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:24 +0000 UTC - event for hostexec-i-048afc59cd0c5fa4a-lnrb6: {kubelet i-048afc59cd0c5fa4a} Started: Started container agnhost-container Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:26 +0000 UTC - event for hostexec-i-048afc59cd0c5fa4a-lnrb6: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container agnhost-container Jan 20 17:20:49.910: INFO: At 2023-01-20 17:15:26 +0000 UTC - event for pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container write-pod Jan 20 17:20:50.009: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 17:20:50.009: INFO: Jan 20 17:20:50.113: INFO: Logging node info for node i-02cae73514916eb60 Jan 20 17:20:50.210: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 6920 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"e6:28:1d:38:9c:ba"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status} {flanneld Update v1 2023-01-20 17:16:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:16:23 +0000 UTC,LastTransitionTime:2023-01-20 17:16:23 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:20:50.210: INFO: Logging kubelet events for node i-02cae73514916eb60 Jan 20 17:20:50.314: INFO: Logging pods the kubelet thinks is on node i-02cae73514916eb60 Jan 20 17:20:50.416: INFO: etcd-manager-events-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:20:50.416: INFO: kube-scheduler-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container kube-scheduler ready: true, restart count 1 Jan 20 17:20:50.416: INFO: ebs-csi-node-lfls8 started at 2023-01-20 17:06:58 +0000 UTC (0+3 container statuses recorded) Jan 20 17:20:50.416: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:20:50.416: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:20:50.416: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:20:50.416: INFO: kube-flannel-ds-5nkqq started at 2023-01-20 17:06:58 +0000 UTC (2+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:20:50.416: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:20:50.416: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:20:50.416: INFO: kops-controller-mqtlq started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container kops-controller ready: true, restart count 2 Jan 20 17:20:50.416: INFO: etcd-manager-main-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:20:50.416: INFO: kube-apiserver-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+2 container statuses recorded) Jan 20 17:20:50.416: INFO: Container healthcheck ready: true, restart count 1 Jan 20 17:20:50.416: INFO: Container kube-apiserver ready: true, restart count 2 Jan 20 17:20:50.416: INFO: kube-controller-manager-i-02cae73514916eb60 started at 2023-01-20 17:06:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 20 17:20:50.416: INFO: kube-proxy-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:20:50.416: INFO: dns-controller-74d4646d88-p7zxr started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container dns-controller ready: true, restart count 1 Jan 20 17:20:50.416: INFO: ebs-csi-controller-c9fc69cf5-kn566 started at 2023-01-20 17:07:01 +0000 UTC (0+5 container statuses recorded) Jan 20 17:20:50.416: INFO: Container 
csi-attacher ready: true, restart count 2 Jan 20 17:20:50.416: INFO: Container csi-provisioner ready: true, restart count 2 Jan 20 17:20:50.416: INFO: Container csi-resizer ready: true, restart count 1 Jan 20 17:20:50.416: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:20:50.416: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:20:50.416: INFO: aws-cloud-controller-manager-2qgs4 started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:50.416: INFO: Container aws-cloud-controller-manager ready: true, restart count 2 Jan 20 17:20:50.757: INFO: Latency metrics for node i-02cae73514916eb60 Jan 20 17:20:50.757: INFO: Logging node info for node i-03af3dbca738ba168 Jan 20 17:20:50.810: INFO: Node Info: &Node{ObjectMeta:{i-03af3dbca738ba168 f2b83166-36e9-4e14-8fe3-7e4da5f5a758 15552 0 2023-01-20 17:07:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-03af3dbca738ba168 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-03af3dbca738ba168 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-03af3dbca738ba168"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"ea:9a:cb:28:29:d0"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.58.114 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:33 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:20:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03af3dbca738ba168,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:26 +0000 UTC,LastTransitionTime:2023-01-20 17:18:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:33 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:33 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:33 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:33 +0000 UTC,LastTransitionTime:2023-01-20 17:18:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.114,},NodeAddress{Type:ExternalIP,Address:54.92.220.56,},NodeAddress{Type:InternalDNS,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:Hostname,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-92-220-56.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a474c9a9b98f9bdaf7a97ffdf305e,SystemUUID:ec2a474c-9a9b-98f9-bdaf-7a97ffdf305e,BootID:67cb1ab9-8c0f-4a0e-aa27-d7cde3225458,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0e32dc9872409b22a],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e32dc9872409b22a,DevicePath:,},},Config:nil,},} Jan 20 17:20:50.810: INFO: Logging kubelet events for node i-03af3dbca738ba168 Jan 20 17:20:50.913: INFO: Logging pods the kubelet thinks is on node i-03af3dbca738ba168 Jan 20 17:20:51.020: INFO: kube-proxy-i-03af3dbca738ba168 started at 2023-01-20 17:07:42 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:20:51.020: INFO: pod2 started at <nil> (0+0 container statuses recorded) Jan 20 17:20:51.020: INFO: coredns-559769c974-6f8t8 started at 2023-01-20 17:08:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container coredns ready: true, restart count 1 Jan 20 17:20:51.020: INFO: test-rollover-deployment-6c6df9974f-x6f4f started at 2023-01-20 17:20:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container agnhost ready: false, restart count 0 Jan 20 17:20:51.020: INFO: boom-server started at 2023-01-20 17:14:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container boom-server ready: false, restart count 0 Jan 20 17:20:51.020: INFO: hostexec-i-03af3dbca738ba168-q6k7b started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:20:51.020: INFO: netserver-0 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container webserver ready: false, 
restart count 0 Jan 20 17:20:51.020: INFO: simpletest.deployment-7cf4fd9d8f-l6vlr started at 2023-01-20 17:20:46 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container nginx ready: false, restart count 0 Jan 20 17:20:51.020: INFO: sample-crd-conversion-webhook-deployment-74ff66dd47-s6hlx started at 2023-01-20 17:20:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 20 17:20:51.020: INFO: service-proxy-disabled-x6wst started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 20 17:20:51.020: INFO: pod-secrets-fd7b064c-e96c-471e-b950-844fb9f44612 started at 2023-01-20 17:20:42 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container secret-volume-test ready: false, restart count 0 Jan 20 17:20:51.020: INFO: kube-flannel-ds-6vmgt started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:20:51.020: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:20:51.020: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:20:51.020: INFO: pod1 started at <nil> (0+0 container statuses recorded) Jan 20 17:20:51.020: INFO: test-rs-9jktl started at <nil> (0+0 container statuses recorded) Jan 20 17:20:51.020: INFO: service-proxy-toggled-zghmz started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container service-proxy-toggled ready: true, restart count 1 Jan 20 17:20:51.020: INFO: hostexec-i-03af3dbca738ba168-48rhp started at 2023-01-20 17:17:12 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:20:51.020: INFO: local-client started at 2023-01-20 17:17:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container local-client ready: true, restart count 0 Jan 20 17:20:51.020: INFO: ebs-csi-node-wmgfk started at 2023-01-20 17:18:21 +0000 UTC (0+3 container statuses recorded) Jan 20 17:20:51.020: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:20:51.020: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:20:51.020: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:20:51.020: INFO: ss2-0 started at 2023-01-20 17:20:36 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container webserver ready: true, restart count 0 Jan 20 17:20:51.020: INFO: inline-volume-tester-npxd6 started at 2023-01-20 17:18:30 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.020: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 20 17:20:51.020: INFO: affinity-clusterip-transition-6lshk started at <nil> (0+0 container statuses recorded) Jan 20 17:20:51.020: INFO: alpine-nnp-false-c6120892-79f2-4c8b-8592-02764d42673e started at <nil> (0+0 container statuses recorded) Jan 20 17:20:51.575: INFO: Latency metrics for node i-03af3dbca738ba168 Jan 20 17:20:51.575: INFO: Logging node info for node i-0460dbd3e490039bb Jan 20 17:20:51.660: INFO: Node Info: &Node{ObjectMeta:{i-0460dbd3e490039bb 3ed25acd-2f33-4687-a606-3d5a944590c8 15792 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux 
failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0460dbd3e490039bb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0460dbd3e490039bb topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6516":"i-0460dbd3e490039bb","csi-mock-csi-mock-volumes-4209":"csi-mock-csi-mock-volumes-4209","ebs.csi.aws.com":"i-0460dbd3e490039bb"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"0a:dc:21:c8:4e:3e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.44.83 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:31 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:20:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0460dbd3e490039bb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:11:02 +0000 UTC,LastTransitionTime:2023-01-20 17:11:02 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:10:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.83,},NodeAddress{Type:ExternalIP,Address:3.85.92.171,},NodeAddress{Type:InternalDNS,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-85-92-171.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec214ec8f7aec9bca6997e12c5d9fa17,SystemUUID:ec214ec8-f7ae-c9bc-a699-7e12c5d9fa17,BootID:6958a09a-b123-4522-ba50-97e69196d1e0,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b 
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-6516^a6d3a37e-98e6-11ed-b9ef-a2c5fd84bcd1 kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-6516^a6d3a37e-98e6-11ed-b9ef-a2c5fd84bcd1,DevicePath:,},},Config:nil,},} Jan 20 17:20:51.660: INFO: Logging kubelet events for node i-0460dbd3e490039bb Jan 20 17:20:51.766: INFO: Logging pods the kubelet thinks is on node i-0460dbd3e490039bb Jan 20 17:20:51.871: INFO: affinity-clusterip-transition-v2cgr started at <nil> (0+0 container statuses recorded) Jan 20 17:20:51.871: INFO: kube-flannel-ds-q8m2b started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:20:51.871: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:20:51.871: INFO: netserver-1 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container webserver ready: true, restart count 0 Jan 20 17:20:51.871: INFO: ss2-1 started at 2023-01-20 17:20:37 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container webserver ready: true, restart count 0 Jan 20 17:20:51.871: INFO: ebs-csi-node-kmj84 started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded) Jan 20 17:20:51.871: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:20:51.871: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:20:51.871: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:20:51.871: INFO: downwardapi-volume-65e507d7-2728-4f27-b145-837b0a794a2f started at 2023-01-20 17:15:24 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container client-container ready: false, restart count 0 Jan 20 17:20:51.871: INFO: pod-subpath-test-inlinevolume-kz65 started at 2023-01-20 17:20:46 +0000 UTC (2+2 container statuses recorded) Jan 20 17:20:51.871: INFO: Init container init-volume-inlinevolume-kz65 ready: false, restart count 0 Jan 20 17:20:51.871: INFO: Init container test-init-subpath-inlinevolume-kz65 ready: false, restart count 0 Jan 20 17:20:51.871: INFO: Container test-container-subpath-inlinevolume-kz65 ready: false, restart count 0 Jan 20 17:20:51.871: INFO: 
Container test-container-volume-inlinevolume-kz65 ready: false, restart count 0 Jan 20 17:20:51.871: INFO: test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container etcd ready: true, restart count 0 Jan 20 17:20:51.871: INFO: test-pod-1 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container token-test ready: true, restart count 0 Jan 20 17:20:51.871: INFO: startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 started at 2023-01-20 17:14:57 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container busybox ready: false, restart count 0 Jan 20 17:20:51.871: INFO: service-proxy-disabled-hc668 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:20:51.871: INFO: test-ss-0 started at 2023-01-20 17:19:56 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container webserver ready: true, restart count 0 Jan 20 17:20:51.871: INFO: test-webserver-de11aec3-5b9d-4460-b199-b75d4012849c started at 2023-01-20 17:20:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container test-webserver ready: false, restart count 0 Jan 20 17:20:51.871: INFO: hostexec-i-0460dbd3e490039bb-gl7xm started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:20:51.871: INFO: pvc-volume-tester-x5qmk started at 2023-01-20 17:20:34 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container volume-tester ready: true, restart count 0 Jan 20 17:20:51.871: INFO: kube-proxy-i-0460dbd3e490039bb started at 2023-01-20 17:07:33 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:20:51.871: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:19:42 +0000 UTC (0+7 container statuses recorded) Jan 20 17:20:51.871: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:20:51.871: INFO: exceed-active-deadline-nr657 started at 2023-01-20 17:18:59 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container c ready: false, restart count 0 Jan 20 17:20:51.871: INFO: pod-d9b2c311-b86f-4135-a026-635f052e5073 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container write-pod ready: true, restart count 0 Jan 20 17:20:51.871: INFO: verify-service-down-host-exec-pod started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:20:51.871: INFO: simpletest.rc-jrszk started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container nginx ready: true, restart count 0 Jan 20 17:20:51.871: INFO: 
service-proxy-toggled-bvmzm started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:20:51.871: INFO: busybox-readonly-fscac6b863-d493-44c3-af92-2541b7e24dda started at 2023-01-20 17:20:31 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container busybox-readonly-fscac6b863-d493-44c3-af92-2541b7e24dda ready: true, restart count 0 Jan 20 17:20:51.871: INFO: csi-mockplugin-0 started at 2023-01-20 17:20:05 +0000 UTC (0+4 container statuses recorded) Jan 20 17:20:51.871: INFO: Container busybox ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:20:51.871: INFO: Container mock ready: true, restart count 0 Jan 20 17:20:51.871: INFO: simpletest.deployment-7cf4fd9d8f-qvcdg started at 2023-01-20 17:20:46 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container nginx ready: false, restart count 0 Jan 20 17:20:51.871: INFO: inline-volume-tester-v4x6v started at 2023-01-20 17:19:53 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:51.871: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 20 17:20:52.437: INFO: Latency metrics for node i-0460dbd3e490039bb Jan 20 17:20:52.437: INFO: Logging node info for node i-048afc59cd0c5fa4a Jan 20 17:20:52.510: INFO: Node Info: &Node{ObjectMeta:{i-048afc59cd0c5fa4a 906bdaca-cfdb-4619-98d1-2751663efe41 16112 0 2023-01-20 17:07:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-048afc59cd0c5fa4a kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-048afc59cd0c5fa4a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volume-expand-8612":"i-048afc59cd0c5fa4a","csi-hostpath-volumemode-4314":"i-048afc59cd0c5fa4a","csi-mock-csi-mock-volumes-3661":"i-048afc59cd0c5fa4a","ebs.csi.aws.com":"i-048afc59cd0c5fa4a"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"52:68:72:e8:79:3f"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.41.86 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:48 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:20:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-048afc59cd0c5fa4a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:12 +0000 UTC,LastTransitionTime:2023-01-20 17:18:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:18:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.86,},NodeAddress{Type:ExternalIP,Address:34.201.135.194,},NodeAddress{Type:InternalDNS,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:Hostname,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-201-135-194.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2886bb32c49932d355813f2015452a,SystemUUID:ec2886bb-32c4-9932-d355-813f2015452a,BootID:c3c6217a-92a9-4cf1-a92f-5cf2a5908c35,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-8612^c7536b19-98e6-11ed-b315-9a35c1137196 kubernetes.io/csi/ebs.csi.aws.com^vol-02c3c5599ae572bad kubernetes.io/csi/ebs.csi.aws.com^vol-03eb8ec2c9b202513],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-03eb8ec2c9b202513,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-02c3c5599ae572bad,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-8612^c7536b19-98e6-11ed-b315-9a35c1137196,DevicePath:,},},Config:nil,},} Jan 20 17:20:52.510: INFO: Logging kubelet events for node i-048afc59cd0c5fa4a Jan 20 17:20:52.613: INFO: Logging pods the kubelet thinks is on node i-048afc59cd0c5fa4a Jan 20 17:20:52.725: INFO: kube-proxy-i-048afc59cd0c5fa4a started at 2023-01-20 17:07:31 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:20:52.725: INFO: coredns-559769c974-mkzlp started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container coredns ready: true, restart count 1 Jan 20 17:20:52.725: INFO: netserver-2 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container webserver ready: false, restart count 0 Jan 20 17:20:52.725: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:20:43 +0000 UTC (0+7 container statuses recorded) Jan 20 17:20:52.725: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:20:52.725: INFO: 
Container liveness-probe ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:20:52.725: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:18:06 +0000 UTC (0+7 container statuses recorded) Jan 20 17:20:52.725: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:20:52.725: INFO: inline-volume-tester-sklp8 started at 2023-01-20 17:20:38 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 20 17:20:52.725: INFO: pod-815393ec-d1b3-4f9a-baf6-39b4fa221095 started at 2023-01-20 17:20:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container write-pod ready: true, restart count 0 Jan 20 17:20:52.725: INFO: kube-flannel-ds-nlnn2 started at 2023-01-20 17:18:06 +0000 UTC (2+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container kube-flannel ready: true, restart count 0 Jan 20 17:20:52.725: INFO: startup-script started at 2023-01-20 17:14:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container startup-script ready: false, restart count 0 Jan 20 17:20:52.725: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:18:06 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:20:52.725: INFO: csi-mockplugin-0 started at 2023-01-20 17:18:06 +0000 UTC (0+3 container statuses recorded) Jan 20 17:20:52.725: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container mock ready: true, restart count 0 Jan 20 17:20:52.725: INFO: csi-mockplugin-resizer-0 started at 2023-01-20 17:18:06 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:20:52.725: INFO: coredns-autoscaler-7cb5c5b969-kxr22 started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:52.725: INFO: Container autoscaler ready: false, restart count 0 Jan 20 17:20:52.725: INFO: ebs-csi-node-dkvln started at 2023-01-20 17:18:06 +0000 UTC (0+3 container statuses recorded) Jan 20 17:20:52.725: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:20:52.725: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:20:53.120: INFO: Latency metrics for node i-048afc59cd0c5fa4a Jan 20 17:20:53.120: INFO: Logging node info for node i-0f775d321e19704c3 Jan 20 17:20:53.209: INFO: Node Info: &Node{ObjectMeta:{i-0f775d321e19704c3 19607256-f185-404f-84dd-0198c716bca7 16221 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f775d321e19704c3 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0f775d321e19704c3 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2555":"i-0f775d321e19704c3","ebs.csi.aws.com":"i-0f775d321e19704c3"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"72:43:d6:40:e8:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.55.61 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:51 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:20:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0f775d321e19704c3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054814720 0} {<nil>} 3959780Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949957120 0} {<nil>} 3857380Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:09:35 +0000 UTC,LastTransitionTime:2023-01-20 17:09:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:09:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.61,},NodeAddress{Type:ExternalIP,Address:3.93.201.229,},NodeAddress{Type:InternalDNS,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-93-201-229.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a4be20ed59f70fa8678b6d03004b4,SystemUUID:ec2a4be2-0ed5-9f70-fa86-78b6d03004b4,BootID:d3100caa-b833-4d03-b5c0-4cb4a8b87060,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b 
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-2555^c94f5c01-98e6-11ed-af4a-fe86566bd700 kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-2555^c94f5c01-98e6-11ed-af4a-fe86566bd700,DevicePath:,},},Config:nil,},} Jan 20 17:20:53.210: INFO: Logging kubelet events for node i-0f775d321e19704c3 Jan 20 17:20:53.315: INFO: Logging pods the kubelet thinks is on node i-0f775d321e19704c3 Jan 20 17:20:53.422: INFO: simpletest.rc-9xd2k started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:53.422: INFO: Container nginx ready: true, restart count 0 Jan 20 17:20:53.422: INFO: ebs-csi-node-74dsh started at 2023-01-20 17:07:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:20:53.422: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:20:53.422: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:20:53.422: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:20:53.422: INFO: exceed-active-deadline-5zhv4 started at 2023-01-20 17:18:59 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:53.422: INFO: Container c ready: false, restart count 0 Jan 20 17:20:53.422: INFO: service-proxy-disabled-jg82r started at 2023-01-20 17:17:52 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:53.422: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:20:53.422: INFO: kube-flannel-ds-d9rm4 started at 2023-01-20 17:07:54 +0000 UTC (2+1 container statuses recorded) Jan 20 17:20:53.422: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:20:53.422: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:20:53.422: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:20:53.422: INFO: service-proxy-toggled-8j48l started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:53.422: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:20:53.422: INFO: inline-volume-tester-5rjwh started at 2023-01-20 17:20:50 +0000 UTC (0+1 container statuses recorded) Jan 20 17:20:53.422: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 20 17:20:53.422: INFO: service-proxy-disabled-xwb98 started at 2023-01-20 17:14:55 +0000 UTC (0+1 
container statuses recorded)
Jan 20 17:20:53.422: INFO: Container service-proxy-disabled ready: true, restart count 0
Jan 20 17:20:53.422: INFO: hostexec-i-0f775d321e19704c3-58mj2 started at 2023-01-20 17:20:45 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:20:53.422: INFO: ss2-2 started at 2023-01-20 17:20:50 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container webserver ready: false, restart count 0
Jan 20 17:20:53.422: INFO: test-rollover-controller-sbpsw started at 2023-01-20 17:20:34 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container httpd ready: true, restart count 0
Jan 20 17:20:53.422: INFO: pvc-volume-tester-v7khp started at 2023-01-20 17:13:41 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container volume-tester ready: false, restart count 0
Jan 20 17:20:53.422: INFO: test-pod-3 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container token-test ready: true, restart count 0
Jan 20 17:20:53.422: INFO: coredns-autoscaler-7cb5c5b969-zvbqv started at 2023-01-20 17:17:40 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container autoscaler ready: true, restart count 0
Jan 20 17:20:53.422: INFO: pod-ephm-test-configmap-4dsq started at 2023-01-20 17:19:34 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container test-container-subpath-configmap-4dsq ready: false, restart count 0
Jan 20 17:20:53.422: INFO: kube-proxy-i-0f775d321e19704c3 started at 2023-01-20 17:07:34 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:20:53.422: INFO: test-pod-2 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container token-test ready: true, restart count 0
Jan 20 17:20:53.422: INFO: netserver-3 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container webserver ready: true, restart count 0
Jan 20 17:20:53.422: INFO: affinity-clusterip-transition-kgkp8 started at 2023-01-20 17:20:48 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container affinity-clusterip-transition ready: true, restart count 0
Jan 20 17:20:53.422: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:20:47 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:20:53.422: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:20:53.422: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:20:53.422: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:20:53.422: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:20:53.422: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:20:53.422: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:20:53.422: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:20:53.804: INFO: Latency metrics for node i-0f775d321e19704c3
STEP: Waiting for namespaces [volumemode-4314] to vanish 01/20/23 17:20:53.862
STEP: uninstalling csi csi-hostpath driver 01/20/23 17:21:05.96
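The uninstall sequence that follows deletes, object by object, everything the framework installed for this per-test csi-hostpath deployment: the sidecar ServiceAccounts with their Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings, the CSIDriver registration, and the plugin StatefulSet. A minimal client-go sketch of the same kind of cleanup, assuming a reachable API server and reusing a few object names from the log; this is an illustration, not the framework's actual teardown helper:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path assumed; this run's harness used /root/.kube/config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        ns := "volumemode-4314-8917" // driver namespace from the log
        del := metav1.DeleteOptions{}

        // Namespaced wiring for one sidecar (errors ignored for brevity).
        _ = cs.CoreV1().ServiceAccounts(ns).Delete(ctx, "csi-attacher", del)
        _ = cs.RbacV1().RoleBindings(ns).Delete(ctx, "csi-attacher-role-cfg", del)
        // Cluster-scoped objects carry the per-test suffix seen in the log.
        _ = cs.RbacV1().ClusterRoles().Delete(ctx, "external-attacher-runner-volumemode-4314", del)
        _ = cs.RbacV1().ClusterRoleBindings().Delete(ctx, "csi-attacher-role-volumemode-4314", del)
        // Finally the driver registration and the plugin StatefulSet.
        _ = cs.StorageV1().CSIDrivers().Delete(ctx, "csi-hostpath-volumemode-4314", del)
        _ = cs.AppsV1().StatefulSets(ns).Delete(ctx, "csi-hostpathplugin", del)
    }

The -volumemode-4314 suffix on every cluster-scoped name is what keeps parallel test namespaces from colliding on shared, non-namespaced objects.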
Jan 20 17:21:05.960: INFO: deleting *v1.ServiceAccount: volumemode-4314-8917/csi-attacher
Jan 20 17:21:06.064: INFO: deleting *v1.ClusterRole: external-attacher-runner-volumemode-4314
Jan 20 17:21:06.161: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volumemode-4314
Jan 20 17:21:06.262: INFO: deleting *v1.Role: volumemode-4314-8917/external-attacher-cfg-volumemode-4314
Jan 20 17:21:06.364: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-attacher-role-cfg
Jan 20 17:21:06.462: INFO: deleting *v1.ServiceAccount: volumemode-4314-8917/csi-provisioner
Jan 20 17:21:06.562: INFO: deleting *v1.ClusterRole: external-provisioner-runner-volumemode-4314
Jan 20 17:21:06.662: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-volumemode-4314
Jan 20 17:21:06.762: INFO: deleting *v1.Role: volumemode-4314-8917/external-provisioner-cfg-volumemode-4314
Jan 20 17:21:06.861: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-provisioner-role-cfg
Jan 20 17:21:06.963: INFO: deleting *v1.ServiceAccount: volumemode-4314-8917/csi-snapshotter
Jan 20 17:21:07.061: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-volumemode-4314
Jan 20 17:21:07.162: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-volumemode-4314
Jan 20 17:21:07.265: INFO: deleting *v1.Role: volumemode-4314-8917/external-snapshotter-leaderelection-volumemode-4314
Jan 20 17:21:07.361: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/external-snapshotter-leaderelection
Jan 20 17:21:07.470: INFO: deleting *v1.ServiceAccount: volumemode-4314-8917/csi-external-health-monitor-controller
Jan 20 17:21:07.561: INFO: deleting *v1.ClusterRole: external-health-monitor-controller-runner-volumemode-4314
Jan 20 17:21:07.662: INFO: deleting *v1.ClusterRoleBinding: csi-external-health-monitor-controller-role-volumemode-4314
Jan 20 17:21:07.763: INFO: deleting *v1.Role: volumemode-4314-8917/external-health-monitor-controller-cfg-volumemode-4314
Jan 20 17:21:07.861: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-external-health-monitor-controller-role-cfg
Jan 20 17:21:07.963: INFO: deleting *v1.ServiceAccount: volumemode-4314-8917/csi-resizer
Jan 20 17:21:08.064: INFO: deleting *v1.ClusterRole: external-resizer-runner-volumemode-4314
Jan 20 17:21:08.161: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-volumemode-4314
Jan 20 17:21:08.261: INFO: deleting *v1.Role: volumemode-4314-8917/external-resizer-cfg-volumemode-4314
Jan 20 17:21:08.361: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-resizer-role-cfg
Jan 20 17:21:08.461: INFO: deleting *v1.CSIDriver: csi-hostpath-volumemode-4314
Jan 20 17:21:08.565: INFO: deleting *v1.ServiceAccount: volumemode-4314-8917/csi-hostpathplugin-sa
Jan 20 17:21:08.661: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-attacher-cluster-role-volumemode-4314
Jan 20 17:21:08.761: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-health-monitor-controller-cluster-role-volumemode-4314
Jan 20 17:21:08.861: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-provisioner-cluster-role-volumemode-4314
Jan 20 17:21:08.965: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-resizer-cluster-role-volumemode-4314
Jan 20 17:21:09.061: INFO: deleting *v1.ClusterRoleBinding: csi-hostpathplugin-snapshotter-cluster-role-volumemode-4314
Jan 20 17:21:09.161: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-attacher-role
Jan 20 17:21:09.263: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-health-monitor-controller-role
Jan 20 17:21:09.361: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-provisioner-role
Jan 20 17:21:09.461: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-resizer-role
Jan 20 17:21:09.561: INFO: deleting *v1.RoleBinding: volumemode-4314-8917/csi-hostpathplugin-snapshotter-role
Jan 20 17:21:09.665: INFO: deleting *v1.StatefulSet: volumemode-4314-8917/csi-hostpathplugin
Jan 20 17:21:09.768: INFO: deleting *v1.ClusterRoleBinding: psp-csi-hostpath-role-volumemode-4314
STEP: deleting the driver namespace: volumemode-4314-8917 01/20/23 17:21:09.877
STEP: Collecting events from namespace "volumemode-4314-8917". 01/20/23 17:21:09.878
STEP: Found 58 events. 01/20/23 17:21:09.961
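Before removing the driver namespace, the framework collects and prints that namespace's events, which make up the list below. A sketch of the underlying query, assuming cs is a configured *kubernetes.Clientset as in the earlier sketch; the output format is approximate, not the framework's exact one:

    package diag

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // DumpEvents lists every event in a namespace and prints it in roughly
    // the shape of the log lines below.
    func DumpEvents(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
        events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("Found %d events.\n", len(events.Items))
        for _, e := range events.Items {
            fmt.Printf("At %v - event for %s: {%s } %s: %s\n",
                e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
        }
        return nil
    }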
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:15 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:15 +0000 UTC - event for csi-hostpathplugin-0: {default-scheduler } Scheduled: Successfully assigned volumemode-4314-8917/csi-hostpathplugin-0 to i-048afc59cd0c5fa4a
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:16 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.9.0" already present on machine
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:16 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container hostpath
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:16 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container hostpath
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:16 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1" already present on machine
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:16 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container node-driver-registrar
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container node-driver-registrar
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulling: Pulling image "registry.k8s.io/sig-storage/livenessprobe:v2.7.0"
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/livenessprobe:v2.7.0" in 810.817932ms (810.834153ms including waiting)
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container liveness-probe
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container liveness-probe
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-attacher:v4.0.0" already present on machine
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:17 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-attacher
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-attacher
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-provisioner:v3.3.0" already present on machine
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-provisioner
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-provisioner
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-resizer:v1.6.0" already present on machine
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-resizer
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-resizer
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:18 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulling: Pulling image "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0"
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:19 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0" in 1.372677072s (1.37268206s including waiting)
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:19 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-snapshotter
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:15:19 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-snapshotter
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:17:19 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container hostpath
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:17:19 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container csi-snapshotter
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:17:22 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } RecreatingFailedPod: StatefulSet volumemode-4314-8917/csi-hostpathplugin is recreating failed Pod csi-hostpathplugin-0
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:17:22 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulDelete: delete Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:17:22 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:17:22 +0000 UTC - event for csi-hostpathplugin-0: {default-scheduler } FailedScheduling: 0/5 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/5 nodes are available: 1 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod..
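The FailedScheduling entry above is the fallout of node i-048afc59cd0c5fa4a passing through a NotReady spell: its Ready condition earlier in this dump shows LastTransitionTime 17:18:06, and the node controller taints a NotReady node with node.kubernetes.io/not-ready, so the recreated pod cannot be placed until the taint clears, which is why it is only assigned again at 17:18:07 in the next entry (and why a TaintManagerEviction cancellation appears at 17:18:08). A sketch of the toleration a pod would need to ride out such an episode, with an assumed grace period:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // 300s is an assumed value, matching the toleration the stock
        // DefaultTolerationSeconds admission plugin attaches to pods.
        grace := int64(300)
        tol := corev1.Toleration{
            Key:               "node.kubernetes.io/not-ready",
            Operator:          corev1.TolerationOpExists,
            Effect:            corev1.TaintEffectNoExecute,
            TolerationSeconds: &grace,
        }
        fmt.Printf("%+v\n", tol)
    }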
Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:07 +0000 UTC - event for csi-hostpathplugin-0: {default-scheduler } Scheduled: Successfully assigned volumemode-4314-8917/csi-hostpathplugin-0 to i-048afc59cd0c5fa4a Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:08 +0000 UTC - event for csi-hostpathplugin-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod volumemode-4314-8917/csi-hostpathplugin-0 Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:09 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f52acf0bd47b2c72a9471f1263ebc35ab6c3ca96379c73966672c6b287290c77": plugin type="flannel" failed (add): open /run/flannel/subnet.env: no such file or directory Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.9.0" already present on machine Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container hostpath Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container hostpath Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1" already present on machine Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container node-driver-registrar Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container node-driver-registrar Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/livenessprobe:v2.7.0" already present on machine Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container liveness-probe Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container liveness-probe Jan 20 17:21:09.961: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-attacher:v4.0.0" already present on machine Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-attacher Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-attacher Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-provisioner:v3.3.0" already present on machine Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-provisioner Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for 
csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-provisioner Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-resizer:v1.6.0" already present on machine Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:24 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-resizer Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:25 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-resizer Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:25 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Pulled: Container image "registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0" already present on machine Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:25 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Created: Created container csi-snapshotter Jan 20 17:21:09.962: INFO: At 2023-01-20 17:18:25 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Started: Started container csi-snapshotter Jan 20 17:21:09.962: INFO: At 2023-01-20 17:21:09 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container csi-provisioner Jan 20 17:21:09.962: INFO: At 2023-01-20 17:21:09 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container csi-snapshotter Jan 20 17:21:09.962: INFO: At 2023-01-20 17:21:09 +0000 UTC - event for csi-hostpathplugin-0: {kubelet i-048afc59cd0c5fa4a} Killing: Stopping container csi-resizer Jan 20 17:21:10.010: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 17:21:10.010: INFO: csi-hostpathplugin-0 i-048afc59cd0c5fa4a Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:18:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:18:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:18:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:18:07 +0000 UTC }] Jan 20 17:21:10.010: INFO: Jan 20 17:21:10.411: INFO: Logging node info for node i-02cae73514916eb60 Jan 20 17:21:10.460: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 6920 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"e6:28:1d:38:9c:ba"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status} {flanneld Update v1 2023-01-20 17:16:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:16:23 +0000 UTC,LastTransitionTime:2023-01-20 17:16:23 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b 
registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 20 17:21:10.460: INFO: Logging kubelet events for node i-02cae73514916eb60
Jan 20 17:21:10.513: INFO: Logging pods the kubelet thinks is on node i-02cae73514916eb60
Jan 20 17:21:10.565: INFO: dns-controller-74d4646d88-p7zxr started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container dns-controller ready: true, restart count 1
Jan 20 17:21:10.565: INFO: ebs-csi-controller-c9fc69cf5-kn566 started at 2023-01-20 17:07:01 +0000 UTC (0+5 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container csi-attacher ready: true, restart count 2
Jan 20 17:21:10.565: INFO: Container csi-provisioner ready: true, restart count 2
Jan 20 17:21:10.565: INFO: Container csi-resizer ready: true, restart count 1
Jan 20 17:21:10.565: INFO: Container ebs-plugin ready: true, restart count 1
Jan 20 17:21:10.565: INFO: Container liveness-probe ready: true, restart count 1
Jan 20 17:21:10.565: INFO: aws-cloud-controller-manager-2qgs4 started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container aws-cloud-controller-manager ready: true, restart count 2
Jan 20 17:21:10.565: INFO: etcd-manager-main-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container etcd-manager ready: true, restart count 1
Jan 20 17:21:10.565: INFO: kube-apiserver-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+2 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container healthcheck ready: true, restart count 1
Jan 20 17:21:10.565: INFO: Container kube-apiserver ready: true, restart count 2
Jan 20 17:21:10.565: INFO: kube-controller-manager-i-02cae73514916eb60 started at 2023-01-20 17:06:00 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container kube-controller-manager ready: true, restart count 4
Jan 20 17:21:10.565: INFO: kube-proxy-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:21:10.565: INFO: kops-controller-mqtlq started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container kops-controller ready: true, restart count 2
Jan 20 17:21:10.565: INFO: etcd-manager-events-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container etcd-manager ready: true, restart count 1
Jan 20 17:21:10.565: INFO: kube-scheduler-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container kube-scheduler ready: true, restart count 1
Jan 20 17:21:10.565: INFO: ebs-csi-node-lfls8 started at 2023-01-20 17:06:58 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:21:10.565: INFO: Container ebs-plugin ready: true, restart count 1
Jan 20 17:21:10.565: INFO: Container liveness-probe ready: true, restart count 1
Jan 20 17:21:10.565: INFO: Container node-driver-registrar ready: true, restart count 1
Jan 20 17:21:10.565: INFO: kube-flannel-ds-5nkqq started at 2023-01-20 17:06:58 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:21:10.565: INFO: Init container install-cni-plugin ready: true, restart count 1
Jan 20 17:21:10.565: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:21:10.565: INFO: Container kube-flannel ready: true, restart count 1
Jan 20 17:21:10.780: INFO: Latency metrics for node i-02cae73514916eb60
Jan 20 17:21:10.780: INFO: Logging node info for node i-03af3dbca738ba168
Jan 20 17:21:10.812: INFO: Node Info: &Node{ObjectMeta:{i-03af3dbca738ba168 f2b83166-36e9-4e14-8fe3-7e4da5f5a758 16638 0 2023-01-20 17:07:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-03af3dbca738ba168 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-03af3dbca738ba168 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-03af3dbca738ba168"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"ea:9a:cb:28:29:d0"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.58.114 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:59 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:21:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03af3dbca738ba168,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:26 +0000 UTC,LastTransitionTime:2023-01-20 17:18:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:21:04 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:21:04 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:21:04 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:21:04 +0000 UTC,LastTransitionTime:2023-01-20 17:18:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.114,},NodeAddress{Type:ExternalIP,Address:54.92.220.56,},NodeAddress{Type:InternalDNS,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:Hostname,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-92-220-56.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a474c9a9b98f9bdaf7a97ffdf305e,SystemUUID:ec2a474c-9a9b-98f9-bdaf-7a97ffdf305e,BootID:67cb1ab9-8c0f-4a0e-aa27-d7cde3225458,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-04fe4e7cf6c9b4ad6 kubernetes.io/csi/ebs.csi.aws.com^vol-0e32dc9872409b22a],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e32dc9872409b22a,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04fe4e7cf6c9b4ad6,DevicePath:,},},Config:nil,},}
Jan 20 17:21:10.812: INFO: Logging kubelet events for node i-03af3dbca738ba168
Jan 20 17:21:10.845: INFO: Logging pods the kubelet thinks is on node i-03af3dbca738ba168
Jan 20 17:21:10.882: INFO: coredns-559769c974-6f8t8 started at 2023-01-20 17:08:35 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container coredns ready: true, restart count 1
Jan 20 17:21:10.882: INFO: test-rollover-deployment-6c6df9974f-x6f4f started at 2023-01-20 17:20:47 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container agnhost ready: true, restart count 0
Jan 20 17:21:10.882: INFO: boom-server started at 2023-01-20 17:14:35 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container boom-server ready: false, restart count 0
Jan 20 17:21:10.882: INFO: hostexec-i-03af3dbca738ba168-pq5zr started at 2023-01-20 17:21:00 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:10.882: INFO: hostexec-i-03af3dbca738ba168-q6k7b started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container agnhost-container ready: false, restart count 0
Jan 20 17:21:10.882: INFO: netserver-0 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container webserver ready: false, restart count 0
Jan 20 17:21:10.882: INFO: inline-volume-tester-4jcxc started at 2023-01-20 17:20:56 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 20 17:21:10.882: INFO: service-proxy-disabled-x6wst started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container service-proxy-disabled ready: false, restart count 0
Jan 20 17:21:10.882: INFO: ss2-0 started at 2023-01-20 17:20:56 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container webserver ready: true, restart count 0
Jan 20 17:21:10.882: INFO: kube-flannel-ds-6vmgt started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Init container install-cni-plugin ready: true, restart count 1
Jan 20 17:21:10.882: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:21:10.882: INFO: Container kube-flannel ready: true, restart count 2
Jan 20 17:21:10.882: INFO: pod1 started at 2023-01-20 17:20:49 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container container1 ready: true, restart count 0
Jan 20 17:21:10.882: INFO: test-rs-9jktl started at 2023-01-20 17:20:50 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container httpd ready: true, restart count 0
Jan 20 17:21:10.882: INFO: rs-d4xll started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:10.882: INFO: service-proxy-toggled-zghmz started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container service-proxy-toggled ready: true, restart count 1
Jan 20 17:21:10.882: INFO: update-demo-nautilus-gxzlv started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:10.882: INFO: hostexec-i-03af3dbca738ba168-48rhp started at 2023-01-20 17:17:12 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container agnhost-container ready: false, restart count 0
Jan 20 17:21:10.882: INFO: rs-2fsdp started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:10.882: INFO: local-client started at 2023-01-20 17:17:47 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container local-client ready: true, restart count 0
Jan 20 17:21:10.882: INFO: ebs-csi-node-wmgfk started at 2023-01-20 17:18:21 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container ebs-plugin ready: true, restart count 0
Jan 20 17:21:10.882: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:21:10.882: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:21:10.882: INFO: inline-volume-tester-npxd6 started at 2023-01-20 17:18:30 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 20 17:21:10.882: INFO: rs-ldnb6 started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:10.882: INFO: kube-proxy-i-03af3dbca738ba168 started at 2023-01-20 17:07:42 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:21:10.882: INFO: pod2 started at 2023-01-20 17:20:50 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:10.882: INFO: Container container1 ready: true, restart count 0
Jan 20 17:21:11.095: INFO: Latency metrics for node i-03af3dbca738ba168
Jan 20 17:21:11.095: INFO: Logging node info for node i-0460dbd3e490039bb
Jan 20 17:21:11.124: INFO: Node Info: &Node{ObjectMeta:{i-0460dbd3e490039bb 3ed25acd-2f33-4687-a606-3d5a944590c8 16841 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0460dbd3e490039bb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0460dbd3e490039bb topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6516":"i-0460dbd3e490039bb","ebs.csi.aws.com":"i-0460dbd3e490039bb"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"0a:dc:21:c8:4e:3e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.44.83 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:31 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:21:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0460dbd3e490039bb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:11:02 +0000 UTC,LastTransitionTime:2023-01-20 17:11:02 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:30 +0000 UTC,LastTransitionTime:2023-01-20 17:10:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.83,},NodeAddress{Type:ExternalIP,Address:3.85.92.171,},NodeAddress{Type:InternalDNS,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-85-92-171.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec214ec8f7aec9bca6997e12c5d9fa17,SystemUUID:ec214ec8-f7ae-c9bc-a699-7e12c5d9fa17,BootID:6958a09a-b123-4522-ba50-97e69196d1e0,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b 
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-6516^a6d3a37e-98e6-11ed-b9ef-a2c5fd84bcd1 kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-6516^a6d3a37e-98e6-11ed-b9ef-a2c5fd84bcd1,DevicePath:,},},Config:nil,},}
Jan 20 17:21:11.124: INFO: Logging kubelet events for node i-0460dbd3e490039bb
Jan 20 17:21:11.157: INFO: Logging pods the kubelet thinks is on node i-0460dbd3e490039bb
Jan 20 17:21:11.195: INFO: kube-flannel-ds-q8m2b started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Init container install-cni-plugin ready: true, restart count 1
Jan 20 17:21:11.195: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container kube-flannel ready: true, restart count 2
Jan 20 17:21:11.195: INFO: netserver-1 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container webserver ready: true, restart count 0
Jan 20 17:21:11.195: INFO: test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container etcd ready: true, restart count 0
Jan 20 17:21:11.195: INFO: test-pod-1 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container token-test ready: true, restart count 0
Jan 20 17:21:11.195: INFO: ebs-csi-node-kmj84 started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container ebs-plugin ready: true, restart count 1
Jan 20 17:21:11.195: INFO: Container liveness-probe ready: true, restart count 1
Jan 20 17:21:11.195: INFO: Container node-driver-registrar ready: true, restart count 1
Jan 20 17:21:11.195: INFO: downwardapi-volume-65e507d7-2728-4f27-b145-837b0a794a2f started at 2023-01-20 17:15:24 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container client-container ready: false, restart count 0
Jan 20 17:21:11.195: INFO: startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 started at 2023-01-20 17:14:57 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container busybox ready: false, restart count 0
Jan 20 17:21:11.195: INFO: local-injector started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:11.195: INFO: service-proxy-disabled-hc668 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container service-proxy-disabled ready: true, restart count 0
Jan 20 17:21:11.195: INFO: test-ss-0 started at 2023-01-20 17:19:56 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container webserver ready: true, restart count 0
Jan 20 17:21:11.195: INFO: hostexec-i-0460dbd3e490039bb-ckrz7 started at 2023-01-20 17:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:11.195: INFO: test-webserver-de11aec3-5b9d-4460-b199-b75d4012849c started at 2023-01-20 17:20:35 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container test-webserver ready: true, restart count 0
Jan 20 17:21:11.195: INFO: rs-p9trf started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:11.195: INFO: hostexec-i-0460dbd3e490039bb-gl7xm started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:11.195: INFO: rs-qgfpj started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:11.195: INFO: rs-hhmgw started at <nil> (0+0 container statuses recorded)
Jan 20 17:21:11.195: INFO: kube-proxy-i-0460dbd3e490039bb started at 2023-01-20 17:07:33 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:21:11.195: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:19:42 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:21:11.195: INFO: master started at 2023-01-20 17:21:07 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container cntr ready: false, restart count 0
Jan 20 17:21:11.195: INFO: pod-d9b2c311-b86f-4135-a026-635f052e5073 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container write-pod ready: true, restart count 0
Jan 20 17:21:11.195: INFO: busybox-readonly-fscac6b863-d493-44c3-af92-2541b7e24dda started at 2023-01-20 17:20:31 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container busybox-readonly-fscac6b863-d493-44c3-af92-2541b7e24dda ready: true, restart count 0
Jan 20 17:21:11.195: INFO: ss2-1 started at 2023-01-20 17:21:02 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container webserver ready: true, restart count 0
Jan 20 17:21:11.195: INFO: verify-service-down-host-exec-pod started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:11.195: INFO: simpletest.rc-jrszk started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container nginx ready: true, restart count 0
Jan 20 17:21:11.195: INFO: service-proxy-toggled-bvmzm started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container service-proxy-toggled ready: true, restart count 0
Jan 20 17:21:11.195: INFO: csi-mockplugin-0 started at 2023-01-20 17:20:05 +0000 UTC (0+4 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container busybox ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container driver-registrar ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container mock ready: true, restart count 0
Jan 20 17:21:11.195: INFO: inline-volume-tester-v4x6v started at 2023-01-20 17:19:53 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 20 17:21:11.195: INFO: test-rs-7ltg2 started at 2023-01-20 17:21:01 +0000 UTC (0+2 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container httpd ready: true, restart count 0
Jan 20 17:21:11.195: INFO: Container test-rs ready: true, restart count 0
Jan 20 17:21:11.195: INFO: hostexec-i-0460dbd3e490039bb-hzvj9 started at 2023-01-20 17:21:05 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.195: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:11.586: INFO: Latency metrics for node i-0460dbd3e490039bb
Jan 20 17:21:11.586: INFO: Logging node info for node i-048afc59cd0c5fa4a
Jan 20 17:21:11.615: INFO: Node Info: &Node{ObjectMeta:{i-048afc59cd0c5fa4a 906bdaca-cfdb-4619-98d1-2751663efe41 16951 0 2023-01-20 17:07:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-048afc59cd0c5fa4a kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-048afc59cd0c5fa4a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volume-expand-8612":"i-048afc59cd0c5fa4a","ebs.csi.aws.com":"i-048afc59cd0c5fa4a"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"52:68:72:e8:79:3f"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.41.86 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:20:48 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:21:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-048afc59cd0c5fa4a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:12 +0000 UTC,LastTransitionTime:2023-01-20 17:18:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:49 +0000 UTC,LastTransitionTime:2023-01-20 17:18:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.86,},NodeAddress{Type:ExternalIP,Address:34.201.135.194,},NodeAddress{Type:InternalDNS,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:Hostname,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-201-135-194.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2886bb32c49932d355813f2015452a,SystemUUID:ec2886bb-32c4-9932-d355-813f2015452a,BootID:c3c6217a-92a9-4cf1-a92f-5cf2a5908c35,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-8612^c7536b19-98e6-11ed-b315-9a35c1137196 kubernetes.io/csi/ebs.csi.aws.com^vol-02c3c5599ae572bad kubernetes.io/csi/ebs.csi.aws.com^vol-03eb8ec2c9b202513],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-03eb8ec2c9b202513,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-02c3c5599ae572bad,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-8612^c7536b19-98e6-11ed-b315-9a35c1137196,DevicePath:,},},Config:nil,},}
Jan 20 17:21:11.616: INFO: Logging kubelet events for node i-048afc59cd0c5fa4a
Jan 20 17:21:11.648: INFO: Logging pods the kubelet thinks is on node i-048afc59cd0c5fa4a
Jan 20 17:21:11.689: INFO: coredns-autoscaler-7cb5c5b969-kxr22 started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container autoscaler ready: false, restart count 0
Jan 20 17:21:11.689: INFO: ebs-csi-node-dkvln started at 2023-01-20 17:18:06 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container ebs-plugin ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:21:11.689: INFO: rs-5f8bq started at 2023-01-20 17:21:09 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container donothing ready: true, restart count 0
Jan 20 17:21:11.689: INFO: hostexec-i-048afc59cd0c5fa4a-llkw5 started at 2023-01-20 17:21:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:11.689: INFO: kube-proxy-i-048afc59cd0c5fa4a started at 2023-01-20 17:07:31 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:21:11.689: INFO: coredns-559769c974-mkzlp started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container coredns ready: true, restart count 1
Jan 20 17:21:11.689: INFO: netserver-2 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container webserver ready: false, restart count 0
Jan 20 17:21:11.689: INFO: hostexec-i-048afc59cd0c5fa4a-8wm2n started at 2023-01-20 17:21:07 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:21:11.689: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:20:43 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:21:11.689: INFO: inline-volume-tester-sklp8 started at 2023-01-20 17:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 20 17:21:11.689: INFO: pod-815393ec-d1b3-4f9a-baf6-39b4fa221095 started at 2023-01-20 17:20:47 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container write-pod ready: true, restart count 0
Jan 20 17:21:11.689: INFO: kube-flannel-ds-nlnn2 started at 2023-01-20 17:18:06 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Init container install-cni-plugin ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container kube-flannel ready: true, restart count 0
Jan 20 17:21:11.689: INFO: pod-handle-http-request started at 2023-01-20 17:20:54 +0000 UTC (0+2 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container container-handle-http-request ready: true, restart count 0
Jan 20 17:21:11.689: INFO: Container container-handle-https-request ready: true, restart count 0
Jan 20 17:21:11.689: INFO: startup-script started at 2023-01-20 17:14:41 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container startup-script ready: false, restart count 0
Jan 20 17:21:11.689: INFO: rs-hhr52 started at 2023-01-20 17:21:09 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:21:11.689: INFO: Container donothing ready: false, restart count 0
Jan 20 17:21:11.865: INFO: Latency metrics for node i-048afc59cd0c5fa4a
Jan 20 17:21:11.865: INFO: Logging node info for node i-0f775d321e19704c3
Jan 20 17:21:11.895: INFO: Node Info: &Node{ObjectMeta:{i-0f775d321e19704c3 19607256-f185-404f-84dd-0198c716bca7 16843 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f775d321e19704c3 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium 
topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0f775d321e19704c3 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2555":"i-0f775d321e19704c3","csi-mock-csi-mock-volumes-2120":"i-0f775d321e19704c3","ebs.csi.aws.com":"i-0f775d321e19704c3"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"72:43:d6:40:e8:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.55.61 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:21:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-20 17:21:08 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0f775d321e19704c3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054814720 0} {<nil>} 3959780Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949957120 0} {<nil>} 3857380Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:09:35 +0000 UTC,LastTransitionTime:2023-01-20 17:09:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:20:52 +0000 UTC,LastTransitionTime:2023-01-20 17:09:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.61,},NodeAddress{Type:ExternalIP,Address:3.93.201.229,},NodeAddress{Type:InternalDNS,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-93-201-229.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a4be20ed59f70fa8678b6d03004b4,SystemUUID:ec2a4be2-0ed5-9f70-fa86-78b6d03004b4,BootID:d3100caa-b833-4d03-b5c0-4cb4a8b87060,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 
registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-2555^c94f5c01-98e6-11ed-af4a-fe86566bd700 kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-2555^c94f5c01-98e6-11ed-af4a-fe86566bd700,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-2120^d388d496-98e6-11ed-bf2c-ba4c1e90427d,DevicePath:,},},Config:nil,},} Jan 20 17:21:11.896: INFO: Logging kubelet events for node i-0f775d321e19704c3 Jan 20 17:21:11.933: INFO: Logging pods the kubelet thinks is on node i-0f775d321e19704c3 Jan 20 17:21:11.983: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:21:02 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:21:11.983: INFO: service-proxy-disabled-jg82r started at 2023-01-20 17:17:52 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:21:11.983: INFO: update-demo-nautilus-mdjl2 started at <nil> (0+0 container statuses recorded) Jan 20 17:21:11.983: INFO: kube-flannel-ds-d9rm4 started at 2023-01-20 17:07:54 +0000 UTC (2+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:21:11.983: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:21:11.983: INFO: service-proxy-toggled-8j48l started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:21:11.983: INFO: inline-volume-tester-5rjwh started at 2023-01-20 17:20:50 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 20 17:21:11.983: INFO: service-proxy-disabled-xwb98 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:21:11.983: INFO: hostexec-i-0f775d321e19704c3-58mj2 started at 2023-01-20 17:20:45 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:21:11.983: INFO: rs-zlddj started at <nil> (0+0 container statuses recorded) Jan 20 17:21:11.983: INFO: ss2-2 started at 2023-01-20 17:21:06 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container webserver ready: true, restart count 0 Jan 20 17:21:11.983: INFO: pvc-volume-tester-v7khp started at 2023-01-20 17:13:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container volume-tester ready: false, restart count 0 Jan 20 17:21:11.983: INFO: test-pod-3 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: 
Container token-test ready: true, restart count 0 Jan 20 17:21:11.983: INFO: coredns-autoscaler-7cb5c5b969-zvbqv started at 2023-01-20 17:17:40 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container autoscaler ready: true, restart count 0 Jan 20 17:21:11.983: INFO: csi-mockplugin-0 started at 2023-01-20 17:21:02 +0000 UTC (0+3 container statuses recorded) Jan 20 17:21:11.983: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container mock ready: true, restart count 0 Jan 20 17:21:11.983: INFO: pod-subpath-test-preprovisionedpv-hgsq started at 2023-01-20 17:21:02 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container test-container-subpath-preprovisionedpv-hgsq ready: false, restart count 0 Jan 20 17:21:11.983: INFO: pvc-volume-tester-9fb2m started at <nil> (0+0 container statuses recorded) Jan 20 17:21:11.983: INFO: pod-ephm-test-configmap-4dsq started at 2023-01-20 17:19:34 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container test-container-subpath-configmap-4dsq ready: false, restart count 0 Jan 20 17:21:11.983: INFO: test-rs-vgnbc started at 2023-01-20 17:21:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container httpd ready: true, restart count 0 Jan 20 17:21:11.983: INFO: kube-proxy-i-0f775d321e19704c3 started at 2023-01-20 17:07:34 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:21:11.983: INFO: test-pod-2 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container token-test ready: true, restart count 0 Jan 20 17:21:11.983: INFO: netserver-3 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container webserver ready: true, restart count 0 Jan 20 17:21:11.983: INFO: affinity-clusterip-transition-kgkp8 started at 2023-01-20 17:20:48 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container affinity-clusterip-transition ready: true, restart count 0 Jan 20 17:21:11.983: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:20:47 +0000 UTC (0+7 container statuses recorded) Jan 20 17:21:11.983: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:21:11.983: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:21:11.983: INFO: rs-k74s9 started at <nil> (0+0 container statuses recorded) Jan 20 17:21:11.983: INFO: test-ss-1 started at 2023-01-20 17:20:56 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container webserver ready: true, restart count 0 Jan 20 17:21:11.983: INFO: simpletest.rc-9xd2k started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:21:11.983: INFO: Container nginx ready: true, restart count 0 Jan 20 17:21:11.983: INFO: ebs-csi-node-74dsh started at 2023-01-20 17:07:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:21:11.983: INFO: Container ebs-plugin 
ready: true, restart count 1 Jan 20 17:21:11.983: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:21:11.983: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:21:12.195: INFO: Latency metrics for node i-0f775d321e19704c3
STEP: Waiting for namespaces [volumemode-4314-8917] to vanish 01/20/23 17:21:12.226
[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] volumeMode test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] volumeMode dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:21:18.257
[DeferCleanup (Each)] [Testpattern: Dynamic PV (block volmode)] volumeMode tear down framework | framework.go:193
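The per-node dumps above ("Logging pods the kubelet thinks is on node ...") come from the e2e framework's debug helpers, which list every pod scheduled to the node and print per-container readiness. Outside the suite, a roughly equivalent listing can be produced with client-go by filtering pods on spec.nodeName. A minimal sketch, assuming the kubeconfig path this run logged; the package layout and output format are illustrative only:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path; the run above logged ">>> kubeConfig: /root/.kube/config".
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// List pods in all namespaces that are scheduled to one node, similar to
    	// the "Logging pods the kubelet thinks is on node" dump above.
    	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
    		FieldSelector: "spec.nodeName=i-048afc59cd0c5fa4a", // node name taken from the dump
    	})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s/%s: phase=%s, containers=%d\n",
    			p.Namespace, p.Name, p.Status.Phase, len(p.Spec.Containers))
    	}
    }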
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\smock\svolume\sCSI\sVolume\sexpansion\sshould\sexpand\svolume\swithout\srestarting\spod\sif\snodeExpansion\=off$'
test/e2e/framework/debug/dump.go:44
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003c47da0, {0xc004bdd180, 0x1a})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc003a5f1e0}, {0xc004bdd180, 0x1a})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004d957c0?, {0xc004bdd180?, 0x2?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:341 +0x82d
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f944b0)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc00100dfb0?, 0xc003be5fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc000f2fdc8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc00100dfb0?, 0x2946afc?}, {0xae7b420?, 0xc003be5f80?, 0xc0028cc320?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
(from junit_01.xml)
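The trace above bottoms out in debug.dumpEventsInNamespace (dump.go:44), which lists the namespace's events; that List call is what returns "connect: connection refused" once the API server goes away. A minimal sketch of that kind of listing with client-go; the package and function names here are assumptions, not the framework's actual helper:

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // dumpEvents lists and prints the events of one namespace, roughly what the
    // e2e debug helper does. With the API server unreachable, List returns the
    // same *url.Error ("dial tcp ...:443: connect: connection refused") seen above.
    func dumpEvents(cs kubernetes.Interface, ns string) error {
    	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return fmt.Errorf("failed to list events in namespace %q: %w", ns, err)
    	}
    	for _, e := range events.Items {
    		fmt.Printf("%s %s/%s: %s\n",
    			e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
    	}
    	return nil
    }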
[BeforeEach] [sig-storage] CSI mock volume set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:52.989
Jan 20 17:14:52.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes 01/20/23 17:14:52.99
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:53.088
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:53.146
[BeforeEach] [sig-storage] CSI mock volume test/e2e/framework/metrics/init/init.go:31
[It] should expand volume without restarting pod if nodeExpansion=off test/e2e/storage/csi_mock_volume.go:700
STEP: Building a driver namespace object, basename csi-mock-volumes-3661 01/20/23 17:14:53.204
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:53.295
STEP: deploying csi mock driver 01/20/23 17:14:53.353
Jan 20 17:14:53.486: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-attacher Jan 20 17:14:53.541: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3661 Jan 20 17:14:53.541: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-3661 Jan 20 17:14:53.579: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3661 Jan 20 17:14:53.610: INFO: creating *v1.Role: csi-mock-volumes-3661-1300/external-attacher-cfg-csi-mock-volumes-3661 Jan 20 17:14:53.648: INFO: creating *v1.RoleBinding: csi-mock-volumes-3661-1300/csi-attacher-role-cfg Jan 20 17:14:53.680: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-provisioner Jan 20 17:14:53.718: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3661 Jan 20 17:14:53.718: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-3661 Jan 20 17:14:53.751: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3661 Jan 20 17:14:53.782: INFO: creating *v1.Role: csi-mock-volumes-3661-1300/external-provisioner-cfg-csi-mock-volumes-3661 Jan 20 17:14:53.825: INFO: creating *v1.RoleBinding: csi-mock-volumes-3661-1300/csi-provisioner-role-cfg Jan 20 17:14:53.857: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-resizer Jan 20 17:14:53.888: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3661 Jan 20 17:14:53.888: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-3661 Jan 20 17:14:53.928: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3661 Jan 20 17:14:53.961: INFO: creating *v1.Role: csi-mock-volumes-3661-1300/external-resizer-cfg-csi-mock-volumes-3661 Jan 20 17:14:53.994: INFO: creating *v1.RoleBinding: csi-mock-volumes-3661-1300/csi-resizer-role-cfg Jan 20 17:14:54.032: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-snapshotter Jan 20 17:14:54.067: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3661 Jan 20 17:14:54.067: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-3661 Jan 20 17:14:54.104: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3661 Jan 20 17:14:54.138: INFO: creating *v1.Role: csi-mock-volumes-3661-1300/external-snapshotter-leaderelection-csi-mock-volumes-3661 Jan 20 17:14:54.180: INFO: creating *v1.RoleBinding: csi-mock-volumes-3661-1300/external-snapshotter-leaderelection Jan 20 17:14:54.222: INFO: creating *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-mock Jan 20 17:14:54.260: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3661 Jan 20 17:14:54.291: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3661 Jan 20 17:14:54.337: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3661 Jan 20 17:14:54.370: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3661 Jan 20 17:14:54.405: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3661 Jan 20 17:14:54.439: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3661 Jan 20 17:14:54.483: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3661 Jan 20 17:14:54.523: INFO: creating *v1.StatefulSet: csi-mock-volumes-3661-1300/csi-mockplugin Jan 20 17:14:54.570: INFO: creating *v1.StatefulSet: csi-mock-volumes-3661-1300/csi-mockplugin-attacher Jan 20 17:14:54.611: INFO: creating *v1.StatefulSet: csi-mock-volumes-3661-1300/csi-mockplugin-resizer Jan 20 17:14:54.682: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3661 to register on node i-048afc59cd0c5fa4a
STEP: Creating pod 01/20/23 17:15:04.399
Jan 20 17:15:04.433: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 20 17:15:04.468: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-m66sc] to have phase Bound Jan 20 17:15:04.498: INFO: PersistentVolumeClaim pvc-m66sc found but phase is Pending instead of Bound. Jan 20 17:15:06.529: INFO: PersistentVolumeClaim pvc-m66sc found and phase=Bound (2.060510125s) Jan 20 17:15:06.624: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-q2j9j" in namespace "csi-mock-volumes-3661" to be "running" Jan 20 17:15:06.655: INFO: Pod "pvc-volume-tester-q2j9j": Phase="Pending", Reason="", readiness=false. Elapsed: 31.716714ms Jan 20 17:15:08.686: INFO: Pod "pvc-volume-tester-q2j9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062910985s Jan 20 17:15:10.686: INFO: Pod "pvc-volume-tester-q2j9j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062422105s Jan 20 17:15:12.687: INFO: Pod "pvc-volume-tester-q2j9j": Phase="Running", Reason="", readiness=true. Elapsed: 6.063649874s Jan 20 17:15:12.687: INFO: Pod "pvc-volume-tester-q2j9j" satisfied condition "running"
STEP: Expanding current pvc 01/20/23 17:15:12.687
STEP: Waiting for persistent volume resize to finish 01/20/23 17:15:12.758
STEP: Waiting for PVC resize to finish 01/20/23 17:15:14.825
STEP: Deleting pod pvc-volume-tester-q2j9j 01/20/23 17:15:14.859
Jan 20 17:15:14.859: INFO: Deleting pod "pvc-volume-tester-q2j9j" in namespace "csi-mock-volumes-3661" Jan 20 17:15:14.897: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q2j9j" to be fully deleted
STEP: Deleting claim pvc-m66sc 01/20/23 17:15:18.959
Jan 20 17:15:19.025: INFO: Waiting up to 2m0s for PersistentVolume pvc-59cc2840-3686-46cc-b054-abed5ecbe0bd to get deleted Jan 20 17:15:19.054: INFO: PersistentVolume pvc-59cc2840-3686-46cc-b054-abed5ecbe0bd found and phase=Released (29.095809ms) Jan 20 17:15:21.084: INFO: PersistentVolume pvc-59cc2840-3686-46cc-b054-abed5ecbe0bd was removed
STEP: Deleting storageclass csi-mock-volumes-3661-scqj52k 01/20/23 17:15:21.084
STEP: Cleaning up resources 01/20/23 17:15:21.118
[AfterEach] [sig-storage] CSI mock volume test/e2e/framework/node/init/init.go:32
Jan 20 17:15:21.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] CSI Volume expansion test/e2e/storage/drivers/csi.go:699
STEP: deleting the test namespace: csi-mock-volumes-3661 01/20/23 17:15:21.15
STEP: Waiting for namespaces [csi-mock-volumes-3661] to vanish 01/20/23 17:15:21.182
STEP: uninstalling csi mock driver 01/20/23 17:15:27.212
Jan 20 17:15:27.212: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-attacher Jan 20 17:15:27.243: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-3661 Jan 20 17:15:27.276: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-3661 Jan 20 17:15:27.307: INFO: deleting *v1.Role: csi-mock-volumes-3661-1300/external-attacher-cfg-csi-mock-volumes-3661 Jan 20 17:15:27.337: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3661-1300/csi-attacher-role-cfg Jan 20 17:15:27.386: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-provisioner Jan 20 17:15:27.449: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-3661 Jan 20 17:15:47.519: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterroles/external-provisioner-runner-csi-mock-volumes-3661": http2: server sent GOAWAY and closed the connection; LastStreamID=463, ErrCode=NO_ERROR, debug="" Jan 20 17:15:47.519: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-3661 Jan 20 17:15:47.566: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-provisioner-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.566: INFO: deleting *v1.Role: csi-mock-volumes-3661-1300/external-provisioner-cfg-csi-mock-volumes-3661
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: 
connection refused Jan 20 17:15:47.609: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-3661-1300/roles/external-provisioner-cfg-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.609: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3661-1300/csi-provisioner-role-cfg ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.653: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-3661-1300/rolebindings/csi-provisioner-role-cfg": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.653: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-resizer ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.725: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/serviceaccounts/csi-resizer": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.725: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.770: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterroles/external-resizer-runner-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.770: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.814: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-resizer-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.814: INFO: deleting *v1.Role: csi-mock-volumes-3661-1300/external-resizer-cfg-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.855: INFO: deleting failed: Delete 
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-3661-1300/roles/external-resizer-cfg-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.855: INFO: deleting *v1.RoleBinding: csi-mock-volumes-3661-1300/csi-resizer-role-cfg Jan 20 17:15:47.896: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-3661-1300/rolebindings/csi-resizer-role-cfg": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.896: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-snapshotter ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.939: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/serviceaccounts/csi-snapshotter": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.939: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.978: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterroles/external-snapshotter-runner-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:47.978: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.038: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshotter-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.038: INFO: deleting *v1.Role: csi-mock-volumes-3661-1300/external-snapshotter-leaderelection-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.081: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-3661-1300/roles/external-snapshotter-leaderelection-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.081: INFO: deleting *v1.RoleBinding: 
csi-mock-volumes-3661-1300/external-snapshotter-leaderelection ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.120: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-3661-1300/rolebindings/external-snapshotter-leaderelection": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.120: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-3661-1300/csi-mock ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.162: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/serviceaccounts/csi-mock": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.162: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-3661 Jan 20 17:15:48.201: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-attacher-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.201: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.250: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-provisioner-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.250: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.290: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-cluster-driver-registrar-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.290: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-3661 ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 
17:15:48.433: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/psp-csi-controller-driver-registrar-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.433: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-3661 Jan 20 17:15:48.474: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-resizer-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.474: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3661
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.514: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-controller-snapshotter-role-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.514: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3661
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.567: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/csi-mock-sc-csi-mock-volumes-3661": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.567: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3661-1300/csi-mockplugin Jan 20 17:15:48.606: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/csi-mock-volumes-3661-1300/statefulsets/csi-mockplugin": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.606: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3661-1300/csi-mockplugin-attacher
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.651: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/csi-mock-volumes-3661-1300/statefulsets/csi-mockplugin-attacher": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.651: INFO: deleting *v1.StatefulSet: csi-mock-volumes-3661-1300/csi-mockplugin-resizer Jan 20 17:15:48.692: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/csi-mock-volumes-3661-1300/statefulsets/csi-mockplugin-resizer": dial tcp 100.26.139.144:443: connect: connection refused
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
STEP: deleting the driver namespace: csi-mock-volumes-3661-1300 01/20/23 17:15:48.692
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.758: INFO: error deleting namespace csi-mock-volumes-3661-1300: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300": dial tcp 100.26.139.144:443: connect: connection refused
[DeferCleanup (Each)] [sig-storage] CSI mock volume test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] CSI mock volume dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] CSI mock volume tear down framework | framework.go:193
STEP: Destroying namespace "csi-mock-volumes-3661-1300" for this suite. 01/20/23 17:15:48.759
ERROR: get pod list in csi-mock-volumes-3661-1300: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/pods": dial tcp 100.26.139.144:443: connect: connection refused
STEP: Collecting events from namespace "csi-mock-volumes-3661-1300". 01/20/23 17:15:48.796
Jan 20 17:15:48.837: INFO: Unexpected error: failed to list events in namespace "csi-mock-volumes-3661-1300":
    <*url.Error | 0xc0028dce70>: {
        Op: "Get",
        URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/events",
        Err: <*net.OpError | 0xc0028e64b0>{
            Op: "dial", Net: "tcp", Source: nil,
            Addr: <*net.TCPAddr | 0xc00394e810>{IP: [100, 26, 139, 144], Port: 443, Zone: ""},
            Err: <*os.SyscallError | 0xc000de03e0>{Syscall: "connect", Err: <syscall.Errno>0x6f},
        },
    }
Jan 20 17:15:48.838: FAIL: failed to list events in namespace "csi-mock-volumes-3661-1300": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-3661-1300/events": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003c47da0, {0xc004bdd180, 0x1a})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc003a5f1e0}, {0xc004bdd180, 0x1a})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc004d957c0?, {0xc004bdd180?, 0x2?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
	test/e2e/framework/framework.go:341 +0x82d
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f944b0)
	test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc00100dfb0?, 0xc003be5fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc000f2fdc8?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc00100dfb0?, 0x2946afc?}, {0xae7b420?, 0xc003be5f80?, 0xc0028cc320?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
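In the passing portion of the test above, "Expanding current pvc" amounts to raising spec.resources.requests.storage on the bound claim; the external-resizer then reconciles the PV, and the subsequent waits ("Waiting for persistent volume resize to finish", "Waiting for PVC resize to finish") observe the result. A hedged client-go sketch of such a patch (the helper name and the strategic-merge approach are illustrative, not the suite's exact mechanism):

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // expandPVC patches a claim's requested storage, which is the trigger the
    // external-resizer (and, with nodeExpansion=on, the kubelet) acts on.
    func expandPVC(cs kubernetes.Interface, ns, name, newSize string) error {
    	patch := []byte(fmt.Sprintf(`{"spec":{"resources":{"requests":{"storage":%q}}}}`, newSize))
    	_, err := cs.CoreV1().PersistentVolumeClaims(ns).Patch(
    		context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    	return err
    }

For the claim above this would look like expandPVC(cs, "csi-mock-volumes-3661", "pvc-m66sc", "2Gi"); the target size is purely illustrative.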
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\smock\svolume\sCSI\sVolume\sexpansion\sshould\snot\sexpand\svolume\sif\sresizingOnDriver\=off\,\sresizingOnSC\=on$'
test/e2e/storage/csi_mock_volume.go:360
k8s.io/kubernetes/test/e2e/storage.glob..func2.6()
	test/e2e/storage/csi_mock_volume.go:360 +0xad1
k8s.io/kubernetes/test/e2e/storage.glob..func2.11.1()
	test/e2e/storage/csi_mock_volume.go:740 +0xa4f
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.786: failed to list events in namespace "csi-mock-volumes-5223": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
(from junit_01.xml)
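The run below provisions another claim and, as its timestamps show, polls it roughly every 2s until it reports phase Bound before starting the test pod. A rough equivalent of that wait loop, sketched with the apimachinery wait helpers (the package name, helper name, and the 2s interval are assumptions drawn from the log cadence):

    package e2esketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPVCBound polls a claim until it reaches phase Bound, mirroring the
    // "Waiting up to timeout=5m0s for PersistentVolumeClaims ... to have phase Bound"
    // messages in the log below.
    func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err // a "connection refused" here aborts the wait
    		}
    		return pvc.Status.Phase == corev1.ClaimBound, nil
    	})
    }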
[BeforeEach] [sig-storage] CSI mock volume set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:13:21.413
Jan 20 17:13:21.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes 01/20/23 17:13:21.414
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:13:21.52
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:13:21.607
[BeforeEach] [sig-storage] CSI mock volume test/e2e/framework/metrics/init/init.go:31
[It] should not expand volume if resizingOnDriver=off, resizingOnSC=on test/e2e/storage/csi_mock_volume.go:700
STEP: Building a driver namespace object, basename csi-mock-volumes-5223 01/20/23 17:13:21.687
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:13:21.846
STEP: deploying csi mock driver 01/20/23 17:13:21.907
Jan 20 17:13:22.067: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-attacher Jan 20 17:13:22.098: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5223 Jan 20 17:13:22.098: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-5223 Jan 20 17:13:22.131: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5223 Jan 20 17:13:22.169: INFO: creating *v1.Role: csi-mock-volumes-5223-6536/external-attacher-cfg-csi-mock-volumes-5223 Jan 20 17:13:22.201: INFO: creating *v1.RoleBinding: csi-mock-volumes-5223-6536/csi-attacher-role-cfg Jan 20 17:13:22.234: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-provisioner Jan 20 17:13:22.265: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5223 Jan 20 17:13:22.265: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-5223 Jan 20 17:13:22.297: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5223 Jan 20 17:13:22.330: INFO: creating *v1.Role: csi-mock-volumes-5223-6536/external-provisioner-cfg-csi-mock-volumes-5223 Jan 20 17:13:22.361: INFO: creating *v1.RoleBinding: csi-mock-volumes-5223-6536/csi-provisioner-role-cfg Jan 20 17:13:22.392: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-resizer Jan 20 17:13:22.423: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5223 Jan 20 17:13:22.423: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-5223 Jan 20 17:13:22.454: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5223 Jan 20 17:13:22.485: INFO: creating *v1.Role: csi-mock-volumes-5223-6536/external-resizer-cfg-csi-mock-volumes-5223 Jan 20 17:13:22.516: INFO: creating *v1.RoleBinding: csi-mock-volumes-5223-6536/csi-resizer-role-cfg Jan 20 17:13:22.548: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-snapshotter Jan 20 17:13:22.579: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5223 Jan 20 17:13:22.579: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-5223 Jan 20 17:13:22.610: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5223 Jan 20 17:13:22.642: INFO: creating *v1.Role: csi-mock-volumes-5223-6536/external-snapshotter-leaderelection-csi-mock-volumes-5223 Jan 20 17:13:22.673: INFO: creating *v1.RoleBinding: csi-mock-volumes-5223-6536/external-snapshotter-leaderelection Jan 20 17:13:22.704: INFO: creating *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-mock Jan 20 17:13:22.737: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5223 Jan 20 17:13:22.768: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5223 Jan 20 17:13:22.799: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5223 Jan 20 17:13:22.832: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5223 Jan 20 17:13:22.863: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5223 Jan 20 17:13:22.894: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5223 Jan 20 17:13:22.927: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5223 Jan 20 17:13:22.961: INFO: creating *v1.StatefulSet: csi-mock-volumes-5223-6536/csi-mockplugin Jan 20 17:13:22.995: INFO: creating *v1.StatefulSet: csi-mock-volumes-5223-6536/csi-mockplugin-attacher Jan 20 17:13:23.032: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5223 to register on node i-0f775d321e19704c3
STEP: Creating pod 01/20/23 17:13:39.441
Jan 20 17:13:39.473: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 20 17:13:39.511: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-8wxkz] to have phase Bound Jan 20 17:13:39.551: INFO: PersistentVolumeClaim pvc-8wxkz found but phase is Pending instead of Bound. Jan 20 17:13:41.581: INFO: PersistentVolumeClaim pvc-8wxkz found and phase=Bound (2.069153415s) Jan 20 17:13:41.675: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-v7khp" in namespace "csi-mock-volumes-5223" to be "running" Jan 20 17:13:41.705: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 30.218696ms Jan 20 17:13:43.736: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060735901s Jan 20 17:13:45.737: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061479048s Jan 20 17:13:47.736: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061147066s Jan 20 17:13:49.736: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061339644s Jan 20 17:13:51.737: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061888723s Jan 20 17:13:53.736: INFO: Pod "pvc-volume-tester-v7khp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.0610187s Jan 20 17:13:55.739: INFO: Pod "pvc-volume-tester-v7khp": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.063844969s Jan 20 17:13:55.739: INFO: Pod "pvc-volume-tester-v7khp" satisfied condition "running" �[1mSTEP:�[0m Expanding current pvc �[38;5;243m01/20/23 17:13:55.739�[0m ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection 
STEP: Deleting pod pvc-volume-tester-v7khp 01/20/23 17:15:48.56
Jan 20 17:15:48.560: INFO: Deleting pod "pvc-volume-tester-v7khp" in namespace "csi-mock-volumes-5223"
STEP: Deleting claim pvc-8wxkz 01/20/23 17:15:48.607
STEP: Deleting storageclass csi-mock-volumes-5223-scf7bgl 01/20/23 17:15:48.657
[the same pod-list ERROR kept firing between each of these steps; duplicates elided]
STEP: Cleaning up resources 01/20/23 17:15:48.705
Jan 20 17:15:48.705: INFO: Unexpected error: while cleaning up after test:
    <errors.aggregate | len:1, cap:1>: [
        <*errors.errorString | 0xc00158e5d0>{
            s: "pod Delete API error: Delete \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223/pods/pvc-volume-tester-v7khp\": dial tcp 100.26.139.144:443: connect: connection refused",
        },
    ]
Jan 20 17:15:48.706: FAIL: while cleaning up after test:
pod Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223/pods/pvc-volume-tester-v7khp": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func2.6()
	test/e2e/storage/csi_mock_volume.go:360 +0xad1
k8s.io/kubernetes/test/e2e/storage.glob..func2.11.1()
	test/e2e/storage/csi_mock_volume.go:740 +0xa4f
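The <errors.aggregate> wrapper above comes from the cleanup path running every step and collecting the failures instead of stopping at the first one (visible as the tryFunc frames in the second stack trace below). A minimal sketch of that collect-then-aggregate pattern, assuming only the apimachinery errors helper; the error text is copied from this log for illustration:

package main

import (
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// tryAll mimics the pattern behind the aggregate above: every cleanup
// step runs even if earlier ones fail, and all errors are reported
// together at the end.
func tryAll(steps ...func() error) error {
	var errs []error
	for _, step := range steps {
		if err := step(); err != nil {
			errs = append(errs, err)
		}
	}
	// NewAggregate returns nil when errs is empty.
	return utilerrors.NewAggregate(errs)
}

func main() {
	err := tryAll(
		func() error { return fmt.Errorf("pod Delete API error: connection refused") },
		func() error { return nil }, // later steps still run
	)
	fmt.Println(err)
}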
[AfterEach] [sig-storage] CSI mock volume
  test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] CSI Volume expansion
  test/e2e/storage/drivers/csi.go:699
STEP: deleting the test namespace: csi-mock-volumes-5223 01/20/23 17:15:48.747
STEP: Collecting events from namespace "csi-mock-volumes-5223". 01/20/23 17:15:48.747
[pod-list poller ERRORs elided]
Jan 20 17:15:48.786: INFO: Unexpected error: failed to list events in namespace "csi-mock-volumes-5223":
    <*url.Error | 0xc001e1be90>: {
        Op: "Get",
        URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223/events",
        Err: <*net.OpError | 0xc001c04c30>{
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: <*net.TCPAddr | 0xc001e1ba70>{
                IP: [100, 26, 139, 144],
                Port: 443,
                Zone: "",
            },
            Err: <*os.SyscallError | 0xc0011676e0>{
                Syscall: "connect",
                Err: <syscall.Errno>0x6f,
            },
        },
    }
Jan 20 17:15:48.786: FAIL: failed to list events in namespace "csi-mock-volumes-5223": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223/events": dial tcp 100.26.139.144:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc002ce06c8, {0xc003c30390, 0x15})
	test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc003ca2680}, {0xc003c30390, 0x15})
	test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0039f45e0?, {0xc003c30390?, 0x2?})
	test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).DeleteNamespace(0xc000c04d20?, {0xc003c30390?, 0x15?})
	test/e2e/framework/framework.go:412 +0x1c2
k8s.io/kubernetes/test/e2e/storage/drivers.generateDriverCleanupFunc.func1.1()
	test/e2e/storage/drivers/csi.go:1007 +0x25
k8s.io/kubernetes/test/e2e/storage/drivers.tryFunc(0xc001c41740?)
	test/e2e/storage/drivers/csi.go:992 +0x6d
k8s.io/kubernetes/test/e2e/storage/drivers.generateDriverCleanupFunc.func1()
	test/e2e/storage/drivers/csi.go:1007 +0x11a
k8s.io/kubernetes/test/e2e/storage/drivers.(*mockCSIDriver).PrepareTest.func4()
	test/e2e/storage/drivers/csi.go:701 +0x2e
reflect.Value.call({0x662c060?, 0xc0015c0fa8?, 0xc003729fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0015c0fa8?, 0x0?}, {0xae7b420?, 0x5?, 0xc00198ef00?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
Jan 20 17:15:48.827: INFO: error deleting namespace csi-mock-volumes-5223: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused
STEP: uninstalling csi mock driver 01/20/23 17:15:48.827
[the pod-list poller's connection-refused ERROR continued to fire between the deletion attempts below; duplicates elided]
Jan 20 17:15:48.827: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-attacher
Jan 20 17:15:48.866: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/serviceaccounts/csi-attacher": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.866: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-5223
Jan 20 17:15:48.906: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterroles/external-attacher-runner-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.906: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-5223
Jan 20 17:15:48.945: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-attacher-role-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.945: INFO: deleting *v1.Role: csi-mock-volumes-5223-6536/external-attacher-cfg-csi-mock-volumes-5223
Jan 20 17:15:48.988: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-5223-6536/roles/external-attacher-cfg-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused
"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-5223-6536/roles/external-attacher-cfg-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:48.988: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5223-6536/csi-attacher-role-cfg ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.029: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-5223-6536/rolebindings/csi-attacher-role-cfg": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.029: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-provisioner ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.072: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/serviceaccounts/csi-provisioner": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.072: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-5223 Jan 20 17:15:49.108: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterroles/external-provisioner-runner-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.108: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-5223 ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.150: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-provisioner-role-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.150: INFO: deleting *v1.Role: csi-mock-volumes-5223-6536/external-provisioner-cfg-csi-mock-volumes-5223 ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.191: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-5223-6536/roles/external-provisioner-cfg-csi-mock-volumes-5223": dial tcp 100.26.139.144:443: connect: connection refused Jan 20 17:15:49.191: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5223-6536/csi-provisioner-role-cfg ERROR: get pod list in csi-mock-volumes-5223-6536: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-5223-6536/pods": dial tcp 100.26.139.144:443: connect: connection refused ERROR: get pod list 
Jan 20 17:16:19.192: INFO: deleting failed: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/rbac.authorization.k8s.io/v1/namespaces/csi-mock-volumes-5223-6536/rolebindings/csi-provisioner-role-cfg": dial tcp 100.26.139.144:443: i/o timeout
Jan 20 17:16:19.192: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-resizer
Jan 20 17:16:19.326: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-5223
Jan 20 17:16:19.366: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-5223
Jan 20 17:16:19.415: INFO: deleting *v1.Role: csi-mock-volumes-5223-6536/external-resizer-cfg-csi-mock-volumes-5223
Jan 20 17:16:19.458: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5223-6536/csi-resizer-role-cfg
Jan 20 17:16:19.490: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-snapshotter
Jan 20 17:16:19.525: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-5223
Jan 20 17:16:19.568: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-5223
Jan 20 17:16:19.607: INFO: deleting *v1.Role: csi-mock-volumes-5223-6536/external-snapshotter-leaderelection-csi-mock-volumes-5223
Jan 20 17:16:19.640: INFO: deleting *v1.RoleBinding: csi-mock-volumes-5223-6536/external-snapshotter-leaderelection
Jan 20 17:16:19.672: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-5223-6536/csi-mock
Jan 20 17:16:19.701: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-5223
Jan 20 17:16:19.734: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-5223
Jan 20 17:16:19.767: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-5223
Jan 20 17:16:19.806: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-5223
Jan 20 17:16:19.836: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-5223
Jan 20 17:16:19.871: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5223
Jan 20 17:16:19.900: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5223
Jan 20 17:16:19.931: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5223-6536/csi-mockplugin
Jan 20 17:16:19.965: INFO: deleting *v1.StatefulSet: csi-mock-volumes-5223-6536/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-5223-6536 01/20/23 17:16:19.995
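Note the transition in the timestamps above: deletes issued at 17:15:48 fail with connection refused, one poll times out at 17:16:19 with i/o timeout, and everything after that succeeds, so the API server was unreachable for roughly two and a half minutes. The teardown does not retry, which is why every step attempted inside that window was lost. A sketch of a delete that would ride out such a window (illustrative only, not how the framework behaves):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	utilnet "k8s.io/apimachinery/pkg/util/net"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteWithRetry keeps retrying a pod delete across a transient
// API-server outage; "already gone" counts as success.
func deleteWithRetry(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{})
		switch {
		case err == nil || apierrors.IsNotFound(err):
			return true, nil
		case utilnet.IsConnectionRefused(err):
			return false, nil // apiserver down: keep polling
		default:
			return false, err
		}
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := deleteWithRetry(cs, "csi-mock-volumes-5223", "pvc-volume-tester-v7khp"); err != nil {
		panic(err)
	}
	fmt.Println("pod deleted")
}

Whether retrying is desirable here is debatable; the e2e framework deliberately fails fast so that control-plane outages show up as failures instead of being silently absorbed.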
STEP: Collecting events from namespace "csi-mock-volumes-5223-6536". 01/20/23 17:16:19.995
STEP: Found 19 events. 01/20/23 17:16:20.028
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:23 +0000 UTC - event for csi-mockplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:23 +0000 UTC - event for csi-mockplugin-0: {default-scheduler } Scheduled: Successfully assigned csi-mock-volumes-5223-6536/csi-mockplugin-0 to i-0f775d321e19704c3
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:23 +0000 UTC - event for csi-mockplugin-attacher: {statefulset-controller } SuccessfulCreate: create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:23 +0000 UTC - event for csi-mockplugin-attacher-0: {default-scheduler } Scheduled: Successfully assigned csi-mock-volumes-5223-6536/csi-mockplugin-attacher-0 to i-0f775d321e19704c3
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:26 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Pulling: Pulling image "registry.k8s.io/sig-storage/csi-provisioner:v3.3.0"
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:26 +0000 UTC - event for csi-mockplugin-attacher-0: {kubelet i-0f775d321e19704c3} Pulling: Pulling image "registry.k8s.io/sig-storage/csi-attacher:v4.0.0"
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Started: Started container csi-provisioner
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Started: Started container driver-registrar
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Pulling: Pulling image "registry.k8s.io/sig-storage/hostpathplugin:v1.9.0"
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Created: Created container driver-registrar
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Pulled: Container image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1" already present on machine
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Created: Created container csi-provisioner
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:30 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/csi-provisioner:v3.3.0" in 1.674036296s (4.334195934s including waiting)
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:31 +0000 UTC - event for csi-mockplugin-attacher-0: {kubelet i-0f775d321e19704c3} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/csi-attacher:v4.0.0" in 1.439757316s (5.443142999s including waiting)
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:32 +0000 UTC - event for csi-mockplugin-attacher-0: {kubelet i-0f775d321e19704c3} Created: Created container csi-attacher
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:32 +0000 UTC - event for csi-mockplugin-attacher-0: {kubelet i-0f775d321e19704c3} Started: Started container csi-attacher
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:33 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/hostpathplugin:v1.9.0" in 199.927663ms (2.502880886s including waiting)
Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:33 +0000
UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Created: Created container mock Jan 20 17:16:20.028: INFO: At 2023-01-20 17:13:33 +0000 UTC - event for csi-mockplugin-0: {kubelet i-0f775d321e19704c3} Started: Started container mock Jan 20 17:16:20.057: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 17:16:20.057: INFO: csi-mockplugin-0 i-0f775d321e19704c3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:23 +0000 UTC }] Jan 20 17:16:20.057: INFO: csi-mockplugin-attacher-0 i-0f775d321e19704c3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:23 +0000 UTC }] Jan 20 17:16:20.057: INFO: Jan 20 17:16:20.260: INFO: Logging node info for node i-02cae73514916eb60 Jan 20 17:16:20.290: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 6824 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"16:5c:0b:0e:74:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {flanneld Update v1 2023-01-20 17:07:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:07:03 +0000 UTC,LastTransitionTime:2023-01-20 17:07:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:16:20.290: INFO: Logging kubelet events for node i-02cae73514916eb60 Jan 20 17:16:20.372: INFO: Logging pods the kubelet thinks is on node i-02cae73514916eb60 Jan 20 17:16:20.751: INFO: kube-scheduler-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container kube-scheduler ready: true, restart count 1 Jan 20 17:16:20.751: INFO: ebs-csi-node-lfls8 started at 2023-01-20 17:06:58 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:20.751: INFO: Container ebs-plugin ready: false, restart count 0 Jan 20 17:16:20.751: INFO: Container liveness-probe ready: false, restart count 0 Jan 20 17:16:20.751: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 20 17:16:20.751: INFO: kube-flannel-ds-5nkqq started at 2023-01-20 17:06:58 +0000 UTC (2+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:16:20.751: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:16:20.751: INFO: Container kube-flannel ready: false, restart count 0 Jan 20 17:16:20.751: INFO: kops-controller-mqtlq started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container kops-controller ready: true, restart count 0 Jan 20 17:16:20.751: INFO: etcd-manager-events-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:16:20.751: INFO: kube-apiserver-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+2 container statuses recorded) Jan 20 17:16:20.751: INFO: Container healthcheck ready: true, restart count 1 Jan 20 17:16:20.751: INFO: Container kube-apiserver ready: true, restart count 2 Jan 20 17:16:20.751: INFO: kube-controller-manager-i-02cae73514916eb60 started at 2023-01-20 17:06:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container kube-controller-manager ready: false, restart count 3 Jan 20 17:16:20.751: INFO: kube-proxy-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:16:20.751: INFO: dns-controller-74d4646d88-p7zxr started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container dns-controller ready: true, restart count 1 Jan 20 17:16:20.751: INFO: ebs-csi-controller-c9fc69cf5-kn566 started at 2023-01-20 17:07:01 +0000 UTC (0+5 container statuses recorded) Jan 20 17:16:20.751: INFO: Container csi-attacher ready: false, restart count 1 Jan 20 17:16:20.751: INFO: Container csi-provisioner ready: false, restart count 1 Jan 20 17:16:20.751: INFO: Container csi-resizer ready: false, restart count 0 Jan 20 
17:16:20.751: INFO: Container ebs-plugin ready: false, restart count 0 Jan 20 17:16:20.751: INFO: Container liveness-probe ready: false, restart count 0 Jan 20 17:16:20.751: INFO: aws-cloud-controller-manager-2qgs4 started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container aws-cloud-controller-manager ready: true, restart count 0 Jan 20 17:16:20.751: INFO: etcd-manager-main-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:20.751: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:16:21.276: INFO: Latency metrics for node i-02cae73514916eb60 Jan 20 17:16:21.276: INFO: Logging node info for node i-03af3dbca738ba168 Jan 20 17:16:21.308: INFO: Node Info: &Node{ObjectMeta:{i-03af3dbca738ba168 f2b83166-36e9-4e14-8fe3-7e4da5f5a758 6831 0 2023-01-20 17:07:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-03af3dbca738ba168 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-03af3dbca738ba168 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-03af3dbca738ba168"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"26:10:99:e2:a4:c5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.58.114 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:08:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:15:02 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03af3dbca738ba168,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054794240 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949936640 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:08:29 +0000 UTC,LastTransitionTime:2023-01-20 17:08:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.114,},NodeAddress{Type:ExternalIP,Address:54.92.220.56,},NodeAddress{Type:InternalDNS,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:Hostname,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-92-220-56.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a474c9a9b98f9bdaf7a97ffdf305e,SystemUUID:ec2a474c-9a9b-98f9-bdaf-7a97ffdf305e,BootID:7a359fbe-a27d-4b83-b283-7431ad35b17d,KernelVersion:5.15.81-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3432.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-06925cf189e67e28f,DevicePath:,},},Config:nil,},} Jan 20 17:16:21.309: INFO: Logging kubelet events for node i-03af3dbca738ba168 Jan 20 17:16:21.357: INFO: Logging pods the kubelet thinks is on node i-03af3dbca738ba168 Jan 20 17:16:21.407: INFO: boom-server started at 2023-01-20 17:14:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container boom-server ready: true, restart count 0 Jan 20 17:16:21.407: INFO: kube-flannel-ds-6vmgt started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:16:21.407: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:16:21.407: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:16:21.407: INFO: service-proxy-disabled-x6wst started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:16:21.407: INFO: pod-handle-http-request started at 2023-01-20 17:15:11 +0000 UTC (0+2 container statuses recorded) Jan 20 17:16:21.407: INFO: Container container-handle-http-request ready: true, restart count 0 Jan 20 17:16:21.407: INFO: Container container-handle-https-request ready: true, restart count 0 Jan 20 17:16:21.407: INFO: inline-volume-tester-m74pm started at 2023-01-20 17:13:29 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 20 17:16:21.407: INFO: hostexec-i-03af3dbca738ba168-q6k7b started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:16:21.407: INFO: kube-proxy-i-03af3dbca738ba168 started at 2023-01-20 17:07:42 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 17:16:21.407: INFO: ebs-csi-node-8nk5p started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:21.407: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:16:21.407: 
INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:16:21.407: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:16:21.407: INFO: coredns-559769c974-6f8t8 started at 2023-01-20 17:08:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container coredns ready: true, restart count 0 Jan 20 17:16:21.407: INFO: service-proxy-toggled-zghmz started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:16:21.407: INFO: hostexec-i-03af3dbca738ba168-4qz69 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:16:21.407: INFO: netserver-0 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.407: INFO: Container webserver ready: true, restart count 0 Jan 20 17:16:21.619: INFO: Latency metrics for node i-03af3dbca738ba168 Jan 20 17:16:21.619: INFO: Logging node info for node i-0460dbd3e490039bb Jan 20 17:16:21.652: INFO: Node Info: &Node{ObjectMeta:{i-0460dbd3e490039bb 3ed25acd-2f33-4687-a606-3d5a944590c8 6837 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0460dbd3e490039bb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0460dbd3e490039bb"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"0a:dc:21:c8:4e:3e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.44.83 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:10:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:15:25 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:16:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0460dbd3e490039bb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:11:02 +0000 UTC,LastTransitionTime:2023-01-20 17:11:02 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:16 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:16 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:16 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:16 +0000 UTC,LastTransitionTime:2023-01-20 17:10:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.83,},NodeAddress{Type:ExternalIP,Address:3.85.92.171,},NodeAddress{Type:InternalDNS,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-85-92-171.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec214ec8f7aec9bca6997e12c5d9fa17,SystemUUID:ec214ec8-f7ae-c9bc-a699-7e12c5d9fa17,BootID:6958a09a-b123-4522-ba50-97e69196d1e0,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211,DevicePath:,},},Config:nil,},} Jan 20 17:16:21.653: INFO: Logging kubelet events for node i-0460dbd3e490039bb Jan 20 17:16:21.696: INFO: Logging pods the kubelet thinks is on node i-0460dbd3e490039bb Jan 20 17:16:21.753: INFO: kube-flannel-ds-q8m2b started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:16:21.753: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:16:21.753: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:16:21.753: INFO: netserver-1 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container webserver ready: true, restart count 0 Jan 20 17:16:21.753: INFO: kube-proxy-i-0460dbd3e490039bb started at 2023-01-20 17:07:33 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:16:21.753: INFO: ebs-csi-node-kmj84 started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:21.753: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:16:21.753: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:16:21.753: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:16:21.753: INFO: downwardapi-volume-65e507d7-2728-4f27-b145-837b0a794a2f started at 2023-01-20 17:15:24 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container client-container ready: false, restart count 0 Jan 20 17:16:21.753: INFO: test-runtimeclass-runtimeclass-8659-unconfigured-handler-8np4l started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container test ready: false, restart count 0 Jan 20 17:16:21.753: INFO: test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container etcd ready: true, restart count 0 Jan 20 17:16:21.753: INFO: hostexec-i-0460dbd3e490039bb-gl7xm started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:16:21.753: INFO: test-pod-1 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container token-test ready: true, restart count 0 Jan 20 17:16:21.753: INFO: startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 started at 2023-01-20 17:14:57 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container busybox ready: false, restart count 0 Jan 20 17:16:21.753: INFO: pod-d9b2c311-b86f-4135-a026-635f052e5073 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container write-pod ready: true, restart count 0 Jan 20 17:16:21.753: INFO: verify-service-down-host-exec-pod started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:16:21.753: INFO: simpletest.rc-jrszk started at <nil> (0+0 container statuses recorded) Jan 20 17:16:21.753: INFO: service-proxy-disabled-hc668 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container 
statuses recorded) Jan 20 17:16:21.753: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:16:21.753: INFO: service-proxy-toggled-bvmzm started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:21.753: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:16:22.268: INFO: Latency metrics for node i-0460dbd3e490039bb Jan 20 17:16:22.268: INFO: Logging node info for node i-048afc59cd0c5fa4a Jan 20 17:16:22.309: INFO: Node Info: &Node{ObjectMeta:{i-048afc59cd0c5fa4a 906bdaca-cfdb-4619-98d1-2751663efe41 6803 0 2023-01-20 17:07:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-048afc59cd0c5fa4a kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-048afc59cd0c5fa4a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-4314":"i-048afc59cd0c5fa4a","csi-mock-csi-mock-volumes-3661":"i-048afc59cd0c5fa4a","ebs.csi.aws.com":"i-048afc59cd0c5fa4a"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"9e:9c:df:5f:98:2b"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.41.86 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:08:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:15:20 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-048afc59cd0c5fa4a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054786048 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949928448 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:08:27 +0000 UTC,LastTransitionTime:2023-01-20 17:08:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.86,},NodeAddress{Type:ExternalIP,Address:34.201.135.194,},NodeAddress{Type:InternalDNS,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:Hostname,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-201-135-194.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2886bb32c49932d355813f2015452a,SystemUUID:ec2886bb-32c4-9932-d355-813f2015452a,BootID:282ccca5-5996-4f4c-a14f-de0d630f9cd9,KernelVersion:5.15.81-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3432.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volumemode-4314^03e380e6-98e6-11ed-a604-da05ea84c2a8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0773002e9f6f99416,DevicePath:,},},Config:nil,},} Jan 20 17:16:22.309: INFO: Logging kubelet events for node i-048afc59cd0c5fa4a Jan 20 17:16:22.379: INFO: Logging pods the kubelet thinks is on node i-048afc59cd0c5fa4a Jan 20 17:16:22.444: INFO: kube-flannel-ds-hds7n started at 2023-01-20 17:07:51 +0000 UTC (2+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:16:22.444: INFO: coredns-autoscaler-7cb5c5b969-kxr22 started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container autoscaler ready: true, restart count 0 Jan 20 17:16:22.444: INFO: startup-script started at 2023-01-20 17:14:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container startup-script ready: true, restart count 0 Jan 20 17:16:22.444: INFO: netserver-2 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container webserver ready: false, restart count 0 Jan 20 17:16:22.444: INFO: ebs-csi-node-c9wzq started at 2023-01-20 17:07:51 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:22.444: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:16:22.444: INFO: csi-mockplugin-0 started at 2023-01-20 17:14:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:22.444: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container mock ready: true, restart count 0 Jan 20 17:16:22.444: INFO: inline-volume-tester-b9tcm started at 2023-01-20 17:13:40 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: 
INFO: Container csi-volume-tester ready: true, restart count 0 Jan 20 17:16:22.444: INFO: kube-proxy-i-048afc59cd0c5fa4a started at 2023-01-20 17:07:31 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 17:16:22.444: INFO: csi-mockplugin-resizer-0 started at 2023-01-20 17:14:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:16:22.444: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:15:15 +0000 UTC (0+7 container statuses recorded) Jan 20 17:16:22.444: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:16:22.444: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:16:22.444: INFO: pod-b4b7923a-daf8-4e09-8bc3-1eb6903a407b started at 2023-01-20 17:15:19 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container write-pod ready: true, restart count 0 Jan 20 17:16:22.444: INFO: deployment-74d7dd69db-5xdct started at 2023-01-20 17:14:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container nginx ready: false, restart count 0 Jan 20 17:16:22.444: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:14:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:16:22.444: INFO: coredns-559769c974-mkzlp started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.444: INFO: Container coredns ready: true, restart count 0 Jan 20 17:16:22.677: INFO: Latency metrics for node i-048afc59cd0c5fa4a Jan 20 17:16:22.677: INFO: Logging node info for node i-0f775d321e19704c3 Jan 20 17:16:22.708: INFO: Node Info: &Node{ObjectMeta:{i-0f775d321e19704c3 19607256-f185-404f-84dd-0198c716bca7 5772 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f775d321e19704c3 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0f775d321e19704c3 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-5223":"i-0f775d321e19704c3","ebs.csi.aws.com":"i-0f775d321e19704c3"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"72:43:d6:40:e8:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.55.61 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:14:57 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:14:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0f775d321e19704c3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054814720 0} {<nil>} 3959780Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949957120 0} {<nil>} 3857380Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:09:35 +0000 UTC,LastTransitionTime:2023-01-20 17:09:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:14:57 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:14:57 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:14:57 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:14:57 +0000 UTC,LastTransitionTime:2023-01-20 17:09:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.61,},NodeAddress{Type:ExternalIP,Address:3.93.201.229,},NodeAddress{Type:InternalDNS,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-93-201-229.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a4be20ed59f70fa8678b6d03004b4,SystemUUID:ec2a4be2-0ed5-9f70-fa86-78b6d03004b4,BootID:d3100caa-b833-4d03-b5c0-4cb4a8b87060,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33,DevicePath:,},},Config:nil,},} Jan 20 17:16:22.708: INFO: Logging kubelet events for node i-0f775d321e19704c3 Jan 20 17:16:22.766: INFO: Logging pods the kubelet thinks is on node i-0f775d321e19704c3 Jan 20 17:16:22.822: INFO: kube-proxy-i-0f775d321e19704c3 started at 2023-01-20 17:07:34 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:16:22.822: INFO: test-pod-2 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container token-test ready: true, restart count 0 Jan 20 17:16:22.822: INFO: netserver-3 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container webserver ready: true, restart count 0 Jan 20 17:16:22.822: INFO: service-proxy-disabled-xwb98 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:16:22.822: INFO: simpletest.rc-9xd2k started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container nginx ready: false, restart count 0 Jan 20 
17:16:22.822: INFO: csi-mockplugin-0 started at 2023-01-20 17:13:23 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:22.822: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:16:22.822: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:16:22.822: INFO: Container mock ready: true, restart count 0 Jan 20 17:16:22.822: INFO: ebs-csi-node-74dsh started at 2023-01-20 17:07:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:16:22.822: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:16:22.822: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:16:22.822: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:16:22.822: INFO: pvc-volume-tester-v7khp started at 2023-01-20 17:13:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container volume-tester ready: true, restart count 0 Jan 20 17:16:22.822: INFO: test-pod-3 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container token-test ready: true, restart count 0 Jan 20 17:16:22.822: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:13:23 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:16:22.822: INFO: test-cleanup-deployment-7698ff6f6b-65p4c started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container agnhost ready: true, restart count 0 Jan 20 17:16:22.822: INFO: kube-flannel-ds-d9rm4 started at 2023-01-20 17:07:54 +0000 UTC (2+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:16:22.822: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:16:22.822: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:16:22.822: INFO: service-proxy-toggled-8j48l started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:16:22.822: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:16:22.995: INFO: Latency metrics for node i-0f775d321e19704c3 STEP: Waiting for namespaces [csi-mock-volumes-5223-6536] to vanish 01/20/23 17:16:23.06 [DeferCleanup (Each)] [sig-storage] CSI mock volume test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] CSI mock volume dump namespaces | framework.go:196 STEP: dump namespace information after failure 01/20/23 17:17:01.092 STEP: Collecting events from namespace "csi-mock-volumes-5223". 01/20/23 17:17:01.092 STEP: Found 11 events. 
01/20/23 17:17:01.124 Jan 20 17:17:01.124: INFO: At 2023-01-20 17:13:39 +0000 UTC - event for pvc-8wxkz: {csi-mock-csi-mock-volumes-5223_csi-mockplugin-0_d9c90b6d-cbaf-431e-a3a7-a12815e13e83 } Provisioning: External provisioner is provisioning volume for claim "csi-mock-volumes-5223/pvc-8wxkz" Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:39 +0000 UTC - event for pvc-8wxkz: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-mock-csi-mock-volumes-5223" or manually created by system administrator Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:39 +0000 UTC - event for pvc-8wxkz: {csi-mock-csi-mock-volumes-5223_csi-mockplugin-0_d9c90b6d-cbaf-431e-a3a7-a12815e13e83 } ProvisioningSucceeded: Successfully provisioned volume pvc-484dc25a-6535-4dc2-ae25-4da42321b047 Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:41 +0000 UTC - event for pvc-volume-tester-v7khp: {default-scheduler } Scheduled: Successfully assigned csi-mock-volumes-5223/pvc-volume-tester-v7khp to i-0f775d321e19704c3 Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:42 +0000 UTC - event for pvc-volume-tester-v7khp: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-484dc25a-6535-4dc2-ae25-4da42321b047" Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:49 +0000 UTC - event for pvc-volume-tester-v7khp: {kubelet i-0f775d321e19704c3} Pulling: Pulling image "registry.k8s.io/pause:3.9" Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:50 +0000 UTC - event for pvc-volume-tester-v7khp: {kubelet i-0f775d321e19704c3} Pulled: Successfully pulled image "registry.k8s.io/pause:3.9" in 405.768769ms (405.773904ms including waiting) Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:50 +0000 UTC - event for pvc-volume-tester-v7khp: {kubelet i-0f775d321e19704c3} Created: Created container volume-tester Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:50 +0000 UTC - event for pvc-volume-tester-v7khp: {kubelet i-0f775d321e19704c3} Started: Started container volume-tester Jan 20 17:17:01.125: INFO: At 2023-01-20 17:13:55 +0000 UTC - event for pvc-8wxkz: {volume_expand } ExternalExpanding: Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC. Jan 20 17:17:01.125: INFO: At 2023-01-20 17:16:53 +0000 UTC - event for pvc-8wxkz: {volume_expand } ExternalExpanding: Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC. 
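The two ExternalExpanding events above are the expected hand-off for a CSI-backed claim: the in-tree volume_expand controller cannot resize the volume itself, so it ignores the PVC and leaves it to the external csi-resizer sidecar seen running on the nodes. The expansion itself is requested purely through the API, by raising spec.resources.requests.storage on the claim. The Go sketch below illustrates that request with client-go; it is not the e2e suite's own helper, and the kubeconfig path, namespace, claim name, and target size are placeholders, assuming a StorageClass with allowVolumeExpansion: true.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path (the suite happens to log the same one).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Raising spec.resources.requests.storage is the entire expansion trigger;
	// the claim's StorageClass must have allowVolumeExpansion: true. The
	// namespace and claim name below are placeholders, not objects from this run.
	patch := []byte(`{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}`)
	_, err = cs.CoreV1().PersistentVolumeClaims("volume-expand-example").Patch(
		context.TODO(), "example-pvc", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("expansion requested; watch the PVC's status.conditions for resize progress")
}

In the offline variant exercised by the failing test, the pod is deleted before this patch is applied, so the controller-side expansion happens while the volume is detached, and the filesystem resize completes only once a pod mounts the claim again.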
Jan 20 17:17:01.156: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 17:17:01.156: INFO: pvc-volume-tester-v7khp i-0f775d321e19704c3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-20 17:13:41 +0000 UTC }] Jan 20 17:17:01.156: INFO: Jan 20 17:17:01.323: INFO: Logging node info for node i-02cae73514916eb60 Jan 20 17:17:01.352: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 6920 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"e6:28:1d:38:9c:ba"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status} {flanneld Update v1 2023-01-20 17:16:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:16:23 +0000 UTC,LastTransitionTime:2023-01-20 17:16:23 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:17:01.352: INFO: Logging kubelet events for node i-02cae73514916eb60 Jan 20 17:17:01.385: INFO: Logging pods the kubelet thinks is on node i-02cae73514916eb60 Jan 20 17:17:01.433: INFO: kube-proxy-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:17:01.433: INFO: dns-controller-74d4646d88-p7zxr started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container dns-controller ready: true, restart count 1 Jan 20 17:17:01.433: INFO: ebs-csi-controller-c9fc69cf5-kn566 started at 2023-01-20 17:07:01 +0000 UTC (0+5 container statuses recorded) Jan 20 17:17:01.433: INFO: Container csi-attacher ready: true, restart count 2 Jan 20 17:17:01.433: INFO: Container csi-provisioner ready: true, restart count 2 Jan 20 17:17:01.433: INFO: Container csi-resizer ready: true, restart count 1 Jan 20 17:17:01.433: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:17:01.433: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:17:01.433: INFO: aws-cloud-controller-manager-2qgs4 started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container aws-cloud-controller-manager ready: true, restart count 2 Jan 20 17:17:01.433: INFO: etcd-manager-main-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:17:01.433: INFO: kube-apiserver-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+2 container statuses recorded) Jan 20 17:17:01.433: INFO: Container healthcheck ready: true, restart count 1 Jan 20 17:17:01.433: INFO: Container kube-apiserver ready: true, restart count 2 Jan 20 17:17:01.433: INFO: kube-controller-manager-i-02cae73514916eb60 started at 2023-01-20 17:06:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 20 17:17:01.433: INFO: kube-flannel-ds-5nkqq started at 2023-01-20 17:06:58 +0000 UTC (2+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:17:01.433: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:17:01.433: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:17:01.433: INFO: kops-controller-mqtlq started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container kops-controller ready: true, restart count 2 Jan 20 17:17:01.433: INFO: etcd-manager-events-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container etcd-manager ready: true, restart count 1 
Jan 20 17:17:01.433: INFO: kube-scheduler-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.433: INFO: Container kube-scheduler ready: true, restart count 1 Jan 20 17:17:01.433: INFO: ebs-csi-node-lfls8 started at 2023-01-20 17:06:58 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:01.433: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:17:01.433: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:17:01.433: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:17:01.587: INFO: Latency metrics for node i-02cae73514916eb60 Jan 20 17:17:01.587: INFO: Logging node info for node i-03af3dbca738ba168 Jan 20 17:17:01.617: INFO: Node Info: &Node{ObjectMeta:{i-03af3dbca738ba168 f2b83166-36e9-4e14-8fe3-7e4da5f5a758 7345 0 2023-01-20 17:07:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-03af3dbca738ba168 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-03af3dbca738ba168 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-9981":"i-03af3dbca738ba168","ebs.csi.aws.com":"i-03af3dbca738ba168"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"26:10:99:e2:a4:c5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.58.114 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:08:29 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03af3dbca738ba168,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054794240 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949936640 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:08:29 +0000 UTC,LastTransitionTime:2023-01-20 17:08:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.114,},NodeAddress{Type:ExternalIP,Address:54.92.220.56,},NodeAddress{Type:InternalDNS,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:Hostname,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-92-220-56.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a474c9a9b98f9bdaf7a97ffdf305e,SystemUUID:ec2a474c-9a9b-98f9-bdaf-7a97ffdf305e,BootID:7a359fbe-a27d-4b83-b283-7431ad35b17d,KernelVersion:5.15.81-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3432.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b 
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:17:01.617: INFO: Logging kubelet events for node i-03af3dbca738ba168 Jan 20 17:17:01.649: INFO: Logging pods the kubelet thinks is on node i-03af3dbca738ba168 Jan 20 17:17:01.697: INFO: downwardapi-volume-323fa725-b106-4331-8cab-c47e7e66a6b7 started at 2023-01-20 17:16:53 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container client-container ready: false, restart count 0 Jan 20 17:17:01.697: INFO: boom-server started at 2023-01-20 17:14:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container boom-server ready: true, restart count 0 Jan 20 17:17:01.697: INFO: csi-mockplugin-0 started at 2023-01-20 17:16:55 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:01.697: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:17:01.697: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:17:01.697: INFO: Container mock ready: true, restart count 0 Jan 20 17:17:01.697: INFO: hostexec-i-03af3dbca738ba168-q6k7b started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:17:01.697: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:16:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container csi-attacher ready: false, restart count 0 Jan 20 17:17:01.697: INFO: local-injector started at 2023-01-20 17:16:56 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container local-injector ready: false, restart count 0 Jan 20 17:17:01.697: INFO: kube-flannel-ds-6vmgt started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:17:01.697: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:17:01.697: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:17:01.697: INFO: service-proxy-disabled-x6wst started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:17:01.697: INFO: service-proxy-toggled-zghmz started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:17:01.697: INFO: hostexec-i-03af3dbca738ba168-4qz69 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:17:01.697: INFO: pvc-volume-tester-jqms5 started at <nil> (0+0 container statuses recorded) Jan 20 17:17:01.697: INFO: kube-proxy-i-03af3dbca738ba168 started at 2023-01-20 17:07:42 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: 
INFO: Container kube-proxy ready: true, restart count 0 Jan 20 17:17:01.697: INFO: ebs-csi-node-8nk5p started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:01.697: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:17:01.697: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:17:01.697: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:17:01.697: INFO: coredns-559769c974-6f8t8 started at 2023-01-20 17:08:35 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container coredns ready: true, restart count 0 Jan 20 17:17:01.697: INFO: hostexec-i-03af3dbca738ba168-4lrj4 started at 2023-01-20 17:16:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.697: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:17:01.698: INFO: netserver-0 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.698: INFO: Container webserver ready: true, restart count 0 Jan 20 17:17:01.698: INFO: hostexec-i-03af3dbca738ba168-4rdz8 started at 2023-01-20 17:16:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:01.698: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:17:01.912: INFO: Latency metrics for node i-03af3dbca738ba168 Jan 20 17:17:01.912: INFO: Logging node info for node i-0460dbd3e490039bb Jan 20 17:17:01.942: INFO: Node Info: &Node{ObjectMeta:{i-0460dbd3e490039bb 3ed25acd-2f33-4687-a606-3d5a944590c8 6938 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0460dbd3e490039bb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0460dbd3e490039bb"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"0a:dc:21:c8:4e:3e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.44.83 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:11:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:15:25 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:16:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0460dbd3e490039bb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:11:02 +0000 UTC,LastTransitionTime:2023-01-20 17:11:02 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:27 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:27 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:27 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 
17:16:27 +0000 UTC,LastTransitionTime:2023-01-20 17:10:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.83,},NodeAddress{Type:ExternalIP,Address:3.85.92.171,},NodeAddress{Type:InternalDNS,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-85-92-171.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec214ec8f7aec9bca6997e12c5d9fa17,SystemUUID:ec214ec8-f7ae-c9bc-a699-7e12c5d9fa17,BootID:6958a09a-b123-4522-ba50-97e69196d1e0,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211,DevicePath:,},},Config:nil,},} Jan 20 17:17:01.943: INFO: Logging kubelet events for node i-0460dbd3e490039bb Jan 20 17:17:01.981: INFO: Logging pods the kubelet thinks is on node i-0460dbd3e490039bb Jan 20 17:17:02.039: INFO: simpletest.rc-jrszk started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container nginx ready: true, restart count 0 Jan 20 17:17:02.039: INFO: service-proxy-disabled-hc668 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:17:02.039: INFO: service-proxy-toggled-bvmzm started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:17:02.039: INFO: verify-service-down-host-exec-pod started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:17:02.039: INFO: pod-always-succeed10a3afcf-d7a5-4b14-8068-c987f595e56b started at 2023-01-20 17:16:54 +0000 UTC (1+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Init container foo ready: false, restart count 0 Jan 20 17:17:02.039: INFO: Container bar ready: false, restart count 0 Jan 20 17:17:02.039: INFO: all-pods-removed-nncwn started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: hostexec-i-0460dbd3e490039bb-gl7xm started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:17:02.039: INFO: sample-webhook-deployment-865554f4d9-zn8dn started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: pfpod started at 2023-01-20 17:16:54 +0000 UTC (0+2 container statuses recorded) Jan 20 17:17:02.039: INFO: Container portforwardtester ready: false, restart count 0 Jan 20 17:17:02.039: INFO: Container readiness ready: false, restart count 0 Jan 20 17:17:02.039: INFO: netserver-1 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container webserver ready: true, restart count 0 Jan 20 17:17:02.039: INFO: pod-b272c6f7-e0c9-47be-90a6-3aaaca83050b started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: kube-flannel-ds-q8m2b started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:17:02.039: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:17:02.039: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:17:02.039: INFO: ebs-csi-node-kmj84 started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:02.039: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:17:02.039: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:17:02.039: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:17:02.039: INFO: 
downwardapi-volume-65e507d7-2728-4f27-b145-837b0a794a2f started at 2023-01-20 17:15:24 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container client-container ready: false, restart count 0 Jan 20 17:17:02.039: INFO: pod-620a281e-f4ae-4084-afcf-4f3a73a7d4cb started at 2023-01-20 17:16:53 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container test-container ready: false, restart count 0 Jan 20 17:17:02.039: INFO: test-runtimeclass-runtimeclass-8659-unconfigured-handler-8np4l started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container test ready: false, restart count 0 Jan 20 17:17:02.039: INFO: test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container etcd ready: true, restart count 0 Jan 20 17:17:02.039: INFO: test-pod-1 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container token-test ready: true, restart count 0 Jan 20 17:17:02.039: INFO: image-pull-testb94241a7-7ddf-4268-bea6-32afdd428c63 started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: kube-proxy-i-0460dbd3e490039bb started at 2023-01-20 17:07:33 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:17:02.039: INFO: pod-projected-configmaps-7d2edb5e-c55d-4569-b561-000b9319ecaa started at 2023-01-20 17:16:56 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:17:02.039: INFO: pod-hostip-cdcdbd78-97b5-4231-9e71-6cad75910d62 started at 2023-01-20 17:16:56 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container test ready: false, restart count 0 Jan 20 17:17:02.039: INFO: hostexec-i-0460dbd3e490039bb-6ffgj started at 2023-01-20 17:16:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container agnhost-container ready: false, restart count 0 Jan 20 17:17:02.039: INFO: pod-d9b2c311-b86f-4135-a026-635f052e5073 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container write-pod ready: true, restart count 0 Jan 20 17:17:02.039: INFO: all-pods-removed-9hknd started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: sample-webhook-deployment-865554f4d9-24689 started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: busybox-readonly-true-8d2f3eba-86d7-4650-a910-0a0caa9dae9c started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.039: INFO: startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 started at 2023-01-20 17:14:57 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.039: INFO: Container busybox ready: false, restart count 0 Jan 20 17:17:02.389: INFO: Latency metrics for node i-0460dbd3e490039bb Jan 20 17:17:02.389: INFO: Logging node info for node i-048afc59cd0c5fa4a Jan 20 17:17:02.417: INFO: Node Info: &Node{ObjectMeta:{i-048afc59cd0c5fa4a 906bdaca-cfdb-4619-98d1-2751663efe41 7088 0 2023-01-20 17:07:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-048afc59cd0c5fa4a kubernetes.io/os:linux node-role.kubernetes.io/node: 
node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-048afc59cd0c5fa4a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-volumemode-4314":"i-048afc59cd0c5fa4a","csi-mock-csi-mock-volumes-3661":"i-048afc59cd0c5fa4a","ebs.csi.aws.com":"i-048afc59cd0c5fa4a"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"9e:9c:df:5f:98:2b"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.41.86 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:08:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-048afc59cd0c5fa4a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054786048 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949928448 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:08:27 +0000 UTC,LastTransitionTime:2023-01-20 17:08:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:07:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.86,},NodeAddress{Type:ExternalIP,Address:34.201.135.194,},NodeAddress{Type:InternalDNS,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:Hostname,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-201-135-194.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2886bb32c49932d355813f2015452a,SystemUUID:ec2886bb-32c4-9932-d355-813f2015452a,BootID:282ccca5-5996-4f4c-a14f-de0d630f9cd9,KernelVersion:5.15.81-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3432.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:17:02.418: INFO: Logging kubelet events for node i-048afc59cd0c5fa4a Jan 20 17:17:02.450: INFO: Logging pods the kubelet thinks is on node i-048afc59cd0c5fa4a Jan 20 17:17:02.496: INFO: ebs-csi-node-c9wzq started at 2023-01-20 17:07:51 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:02.496: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:17:02.496: INFO: kube-flannel-ds-hds7n started at 2023-01-20 17:07:51 +0000 UTC (2+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 
17:17:02.496: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container kube-flannel ready: true, restart count 1 Jan 20 17:17:02.496: INFO: coredns-autoscaler-7cb5c5b969-kxr22 started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container autoscaler ready: true, restart count 0 Jan 20 17:17:02.496: INFO: startup-script started at 2023-01-20 17:14:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container startup-script ready: true, restart count 0 Jan 20 17:17:02.496: INFO: netserver-2 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container webserver ready: true, restart count 0 Jan 20 17:17:02.496: INFO: kube-proxy-i-048afc59cd0c5fa4a started at 2023-01-20 17:07:31 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 17:17:02.496: INFO: csi-mockplugin-0 started at 2023-01-20 17:14:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:02.496: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container mock ready: true, restart count 0 Jan 20 17:17:02.496: INFO: csi-mockplugin-resizer-0 started at 2023-01-20 17:14:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:17:02.496: INFO: coredns-559769c974-mkzlp started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container coredns ready: true, restart count 0 Jan 20 17:17:02.496: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:15:15 +0000 UTC (0+7 container statuses recorded) Jan 20 17:17:02.496: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:17:02.496: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:17:02.496: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:14:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.496: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:17:02.687: INFO: Latency metrics for node i-048afc59cd0c5fa4a Jan 20 17:17:02.687: INFO: Logging node info for node i-0f775d321e19704c3 Jan 20 17:17:02.715: INFO: Node Info: &Node{ObjectMeta:{i-0f775d321e19704c3 19607256-f185-404f-84dd-0198c716bca7 7520 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f775d321e19704c3 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0f775d321e19704c3 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-4295":"i-0f775d321e19704c3","csi-hostpath-provisioning-761":"i-0f775d321e19704c3","ebs.csi.aws.com":"i-0f775d321e19704c3"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"72:43:d6:40:e8:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.55.61 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-20 17:17:02 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0f775d321e19704c3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} 
{<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054814720 0} {<nil>} 3959780Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949957120 0} {<nil>} 3857380Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:09:35 +0000 UTC,LastTransitionTime:2023-01-20 17:09:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:37 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:37 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:37 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:37 +0000 UTC,LastTransitionTime:2023-01-20 17:09:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.61,},NodeAddress{Type:ExternalIP,Address:3.93.201.229,},NodeAddress{Type:InternalDNS,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-93-201-229.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a4be20ed59f70fa8678b6d03004b4,SystemUUID:ec2a4be2-0ed5-9f70-fa86-78b6d03004b4,BootID:d3100caa-b833-4d03-b5c0-4cb4a8b87060,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 
registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-761^4051c2ac-98e6-11ed-96b1-925e2ff94a94,DevicePath:,},},Config:nil,},} Jan 20 17:17:02.716: INFO: Logging kubelet events for node i-0f775d321e19704c3 Jan 20 17:17:02.747: INFO: Logging pods the kubelet thinks is on node i-0f775d321e19704c3 Jan 20 17:17:02.792: INFO: service-proxy-disabled-xwb98 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:17:02.792: INFO: pod-subpath-test-dynamicpv-dmpq 
started at 2023-01-20 17:17:01 +0000 UTC (1+2 container statuses recorded) Jan 20 17:17:02.792: INFO: Init container test-init-subpath-dynamicpv-dmpq ready: false, restart count 0 Jan 20 17:17:02.792: INFO: Container test-container-subpath-dynamicpv-dmpq ready: false, restart count 0 Jan 20 17:17:02.792: INFO: Container test-container-volume-dynamicpv-dmpq ready: false, restart count 0 Jan 20 17:17:02.792: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:16:55 +0000 UTC (0+7 container statuses recorded) Jan 20 17:17:02.792: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:17:02.792: INFO: simpletest.rc-9xd2k started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container nginx ready: true, restart count 0 Jan 20 17:17:02.792: INFO: ebs-csi-node-74dsh started at 2023-01-20 17:07:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:17:02.792: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:17:02.792: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:17:02.792: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:17:02.792: INFO: inline-volume-tester2-6pxg2 started at <nil> (0+0 container statuses recorded) Jan 20 17:17:02.792: INFO: pvc-volume-tester-v7khp started at 2023-01-20 17:13:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container volume-tester ready: true, restart count 0 Jan 20 17:17:02.792: INFO: test-pod-3 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container token-test ready: true, restart count 0 Jan 20 17:17:02.792: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:16:55 +0000 UTC (0+7 container statuses recorded) Jan 20 17:17:02.792: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:17:02.792: INFO: kube-flannel-ds-d9rm4 started at 2023-01-20 17:07:54 +0000 UTC (2+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:17:02.792: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:17:02.792: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:17:02.792: INFO: service-proxy-toggled-8j48l started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:17:02.792: INFO: netserver-3 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 
17:17:02.792: INFO: Container webserver ready: true, restart count 0 Jan 20 17:17:02.792: INFO: kube-proxy-i-0f775d321e19704c3 started at 2023-01-20 17:07:34 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:17:02.792: INFO: test-pod-2 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container token-test ready: true, restart count 0 Jan 20 17:17:02.792: INFO: inline-volume-tester-ck2ff started at 2023-01-20 17:16:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:17:02.792: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 20 17:17:03.109: INFO: Latency metrics for node i-0f775d321e19704c3 [DeferCleanup (Each)] [sig-storage] CSI mock volume tear down framework | framework.go:193 STEP: Destroying namespace "csi-mock-volumes-5223" for this suite. 01/20/23 17:17:03.109
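Note: the Node Info blocks above are the e2e framework's dump of each Node object, captured as failure diagnostics. A rough manual equivalent, assuming kubectl access to the same cluster (node name taken from the dump above):

# Conditions, capacity, attached volumes, and cached images for one of the dumped nodes
kubectl describe node i-0f775d321e19704c3
# Just the condition summary (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready)
kubectl get node i-0f775d321e19704c3 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'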
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sCSI\smock\svolume\sDelegate\sFSGroup\sto\sCSI\sdriver\s\[LinuxOnly\]\sshould\snot\spass\sFSGroup\sto\sCSI\sdriver\sif\sit\sis\sset\sin\spod\sand\sdriver\ssupports\sVOLUME\_MOUNT\_GROUP$'
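The --ginkgo.focus value is an anchored regular expression over the full spec name: \s stands for a whitespace character and the backslashes escape brackets and other punctuation, so the pattern above matches exactly one test, "Kubernetes e2e suite [It] [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP". For reference, a hypothetical direct invocation against a prebuilt e2e.test binary (paths and provider flag are illustrative, not from this job):

# Any unambiguous substring or regex of the spec name can serve as the focus
./e2e.test -ginkgo.focus='should not pass FSGroup to CSI driver' -provider=aws -kubeconfig=$HOME/.kube/config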
test/e2e/framework/debug/dump.go:44 k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0042f3da0, {0xc0033c5ba0, 0x1a}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc00293f520}, {0xc0033c5ba0, 0x1a}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0035ad460?, {0xc0033c5ba0?, 0x2?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:341 +0x82d k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e74e10) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x662c060?, 0xc001383260?, 0x13?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc001383260?, 0x2946afc?}, {0xae7b420?, 0xc003bf5780?, 0xc003bf5770?}) /usr/local/go/src/reflect/value.go:368 +0xbc
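This trace is the post-failure debug path: Framework.AfterEach calls debug.DumpAllNamespaceInfo, which among other things lists events and pods in the test namespace. A rough manual equivalent, assuming the namespace still exists (the framework deletes it on cleanup):

# What dumpEventsInNamespace collects, done by hand
kubectl get events -n csi-mock-volumes-8834 --sort-by=.lastTimestamp
# Pod state in the same namespace
kubectl get pods -n csi-mock-volumes-8834 -o wide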
[BeforeEach] [sig-storage] CSI mock volume set up framework | framework.go:178 STEP: Creating a kubernetes client 01/20/23 17:13:36.794 Jan 20 17:13:36.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename csi-mock-volumes 01/20/23 17:13:36.795 STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:13:36.893 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:13:36.953 [BeforeEach] [sig-storage] CSI mock volume test/e2e/framework/metrics/init/init.go:31 [It] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP test/e2e/storage/csi_mock_volume.go:1771 STEP: Building a driver namespace object, basename csi-mock-volumes-8834 01/20/23 17:13:37.014 STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:13:37.109 STEP: deploying csi mock proxy 01/20/23 17:13:37.169 Jan 20 17:13:37.300: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-attacher Jan 20 17:13:37.332: INFO: creating *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8834 Jan 20 17:13:37.332: INFO: Define cluster role external-attacher-runner-csi-mock-volumes-8834 Jan 20 17:13:37.364: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8834 Jan 20 17:13:37.396: INFO: creating *v1.Role: csi-mock-volumes-8834-3228/external-attacher-cfg-csi-mock-volumes-8834 Jan 20 17:13:37.427: INFO: creating *v1.RoleBinding: csi-mock-volumes-8834-3228/csi-attacher-role-cfg Jan 20 17:13:37.459: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-provisioner Jan 20 17:13:37.496: INFO: creating *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8834 Jan 20 17:13:37.496: INFO: Define cluster role external-provisioner-runner-csi-mock-volumes-8834 Jan 20 17:13:37.528: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8834 Jan 20 17:13:37.561: INFO: creating *v1.Role: csi-mock-volumes-8834-3228/external-provisioner-cfg-csi-mock-volumes-8834 Jan 20 17:13:37.600: INFO: creating *v1.RoleBinding: csi-mock-volumes-8834-3228/csi-provisioner-role-cfg Jan 20 17:13:37.632: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-resizer Jan 20 17:13:37.665: INFO: creating *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8834 Jan 20 17:13:37.665: INFO: Define cluster role external-resizer-runner-csi-mock-volumes-8834 Jan 20 17:13:37.699: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8834 Jan 20 17:13:37.733: INFO: creating *v1.Role: csi-mock-volumes-8834-3228/external-resizer-cfg-csi-mock-volumes-8834 Jan 20 17:13:37.764: INFO: creating *v1.RoleBinding: csi-mock-volumes-8834-3228/csi-resizer-role-cfg Jan 20 17:13:37.796: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-snapshotter Jan 20 17:13:37.830: INFO: creating *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8834 Jan 20 17:13:37.830: INFO: Define cluster role external-snapshotter-runner-csi-mock-volumes-8834 Jan 20 17:13:37.863: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8834 Jan 20 17:13:37.900: INFO: creating *v1.Role: csi-mock-volumes-8834-3228/external-snapshotter-leaderelection-csi-mock-volumes-8834 Jan 20 17:13:37.939: INFO: creating *v1.RoleBinding:
csi-mock-volumes-8834-3228/external-snapshotter-leaderelection Jan 20 17:13:38.013: INFO: creating *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-mock Jan 20 17:13:38.048: INFO: creating *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8834 Jan 20 17:13:38.094: INFO: creating *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8834 Jan 20 17:13:38.132: INFO: creating *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8834 Jan 20 17:13:38.164: INFO: creating *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8834 Jan 20 17:13:38.198: INFO: creating *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8834 Jan 20 17:13:38.231: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8834 Jan 20 17:13:38.263: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8834 Jan 20 17:13:38.298: INFO: creating *v1.StatefulSet: csi-mock-volumes-8834-3228/csi-mockplugin Jan 20 17:13:38.338: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8834 Jan 20 17:13:38.374: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8834" Jan 20 17:13:38.410: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8834 to register on node i-0f775d321e19704c3 I0120 17:13:50.203417 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null} I0120 17:13:50.234210 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8834","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0120 17:13:50.268059 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null} I0120 17:13:50.297990 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null} I0120 17:13:50.564721 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8834","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes/kubernetes/tree/master/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null} I0120 17:13:50.908651 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8834"},"Error":"","FullError":null} STEP: Creating pod with fsGroup 01/20/23 17:13:54.82 Jan 20 17:13:54.854: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Jan 20 17:13:54.887: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-9hvbx] to have phase Bound I0120 17:13:54.906212 6667 csi.go:440] gRPCCall:
{"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e"}}},"Error":"","FullError":null} Jan 20 17:13:54.924: INFO: PersistentVolumeClaim pvc-9hvbx found but phase is Pending instead of Bound. Jan 20 17:13:56.955: INFO: PersistentVolumeClaim pvc-9hvbx found and phase=Bound (2.067357751s) Jan 20 17:13:57.050: INFO: Waiting up to 5m0s for pod "pvc-volume-tester-vrkfc" in namespace "csi-mock-volumes-8834" to be "running" Jan 20 17:13:57.080: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.612658ms I0120 17:13:58.499090 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0120 17:13:58.532290 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0120 17:13:58.575318 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jan 20 17:13:58.614: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:13:58.615: INFO: ExecWithOptions: Clientset creation Jan 20 17:13:58.615: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-8834%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fplugins%2Fkubernetes.io%2Fcsi%2Fcsi-mock-csi-mock-volumes-8834%2F4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a%2Fglobalmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true) I0120 17:13:58.861197 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8834/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e","storage.kubernetes.io/csiProvisionerIdentity":"1674234830314-8081-csi-mock-csi-mock-volumes-8834"}},"Response":{},"Error":"","FullError":null} I0120 17:13:58.895086 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0120 17:13:58.924695 6667 csi.go:440] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0120 17:13:58.954339 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} Jan 20 17:13:58.986: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:13:58.986: INFO: ExecWithOptions: Clientset creation Jan 20 17:13:58.987: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F17fff7fb-b114-4236-b4c6-c9f7e7876ca2%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-51a8472a-655d-4c8c-bc74-a650a74fee8e%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F17fff7fb-b114-4236-b4c6-c9f7e7876ca2%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-51a8472a-655d-4c8c-bc74-a650a74fee8e%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true) Jan 20 17:13:59.112: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06255436s Jan 20 17:13:59.265: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:13:59.266: INFO: ExecWithOptions: Clientset creation Jan 20 17:13:59.266: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/pods/csi-mockplugin-0/exec?command=sh&command=-c&command=if+%21+%5B+-e+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F17fff7fb-b114-4236-b4c6-c9f7e7876ca2%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-51a8472a-655d-4c8c-bc74-a650a74fee8e%2Fmount%27+%5D%3B+then+echo+notexist%3B+elif+%5B+-d+%27%2Fvar%2Flib%2Fkubelet%2Fpods%2F17fff7fb-b114-4236-b4c6-c9f7e7876ca2%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-51a8472a-655d-4c8c-bc74-a650a74fee8e%2Fmount%27+%5D%3B+then+echo+dir%3B+else+echo+nodir%3B+fi&container=busybox&container=busybox&stderr=true&stdout=true) Jan 20 17:13:59.527: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:13:59.528: INFO: ExecWithOptions: Clientset creation Jan 20 17:13:59.528: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/pods/csi-mockplugin-0/exec?command=mkdir&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F17fff7fb-b114-4236-b4c6-c9f7e7876ca2%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-51a8472a-655d-4c8c-bc74-a650a74fee8e%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true) I0120 17:13:59.785914 6667 csi.go:440] gRPCCall: 
{"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8834/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount","target_path":"/var/lib/kubelet/pods/17fff7fb-b114-4236-b4c6-c9f7e7876ca2/volumes/kubernetes.io~csi/pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e","storage.kubernetes.io/csiProvisionerIdentity":"1674234830314-8081-csi-mock-csi-mock-volumes-8834"}},"Response":{},"Error":"","FullError":null} Jan 20 17:14:01.111: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061687116s Jan 20 17:14:03.112: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062198593s Jan 20 17:14:05.112: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062120635s Jan 20 17:14:07.111: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061784261s Jan 20 17:14:09.112: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.062091173s Jan 20 17:14:11.118: INFO: Pod "pvc-volume-tester-vrkfc": Phase="Running", Reason="", readiness=true. Elapsed: 14.068280768s Jan 20 17:14:11.118: INFO: Pod "pvc-volume-tester-vrkfc" satisfied condition "running" �[1mSTEP:�[0m Deleting pod pvc-volume-tester-vrkfc �[38;5;243m01/20/23 17:14:11.118�[0m Jan 20 17:14:11.118: INFO: Deleting pod "pvc-volume-tester-vrkfc" in namespace "csi-mock-volumes-8834" Jan 20 17:14:11.154: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vrkfc" to be fully deleted I0120 17:14:30.007021 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0120 17:14:30.038923 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/17fff7fb-b114-4236-b4c6-c9f7e7876ca2/volumes/kubernetes.io~csi/pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}],"volume_condition":{}},"Error":"","FullError":null} Jan 20 17:14:41.874: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:14:41.886: INFO: ExecWithOptions: Clientset creation Jan 20 17:14:41.886: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/pods/csi-mockplugin-0/exec?command=rm&command=-rf&command=%2Fvar%2Flib%2Fkubelet%2Fpods%2F17fff7fb-b114-4236-b4c6-c9f7e7876ca2%2Fvolumes%2Fkubernetes.io~csi%2Fpvc-51a8472a-655d-4c8c-bc74-a650a74fee8e%2Fmount&container=busybox&container=busybox&stderr=true&stdout=true) I0120 17:14:42.133691 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/17fff7fb-b114-4236-b4c6-c9f7e7876ca2/volumes/kubernetes.io~csi/pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e/mount"},"Response":{},"Error":"","FullError":null} I0120 17:14:42.177203 6667 csi.go:440] gRPCCall: 
{"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null} I0120 17:14:42.206654 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/csi-mock-csi-mock-volumes-8834/4b227777d4dd1fc61c6f884f48641d02b4d121d3fd328cb08b5531fcacdabf8a/globalmount"},"Response":{},"Error":"","FullError":null} �[1mSTEP:�[0m Deleting claim pvc-9hvbx �[38;5;243m01/20/23 17:14:45.218�[0m Jan 20 17:14:45.290: INFO: Waiting up to 2m0s for PersistentVolume pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e to get deleted Jan 20 17:14:45.323: INFO: PersistentVolume pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e found and phase=Bound (32.752489ms) I0120 17:14:45.334904 6667 csi.go:440] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null} Jan 20 17:14:47.357: INFO: PersistentVolume pvc-51a8472a-655d-4c8c-bc74-a650a74fee8e was removed �[1mSTEP:�[0m Deleting storageclass csi-mock-volumes-8834-sc5c4hb �[38;5;243m01/20/23 17:14:47.357�[0m �[1mSTEP:�[0m Cleaning up resources �[38;5;243m01/20/23 17:14:47.394�[0m Jan 20 17:14:47.427: INFO: deleting .: csi-mock-csi-mock-volumes-8834 [AfterEach] [sig-storage] CSI mock volume test/e2e/framework/node/init/init.go:32 Jan 20 17:14:47.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] Delegate FSGroup to CSI driver [LinuxOnly] test/e2e/storage/drivers/csi.go:699 �[1mSTEP:�[0m deleting the test namespace: csi-mock-volumes-8834 �[38;5;243m01/20/23 17:14:47.494�[0m �[1mSTEP:�[0m Waiting for namespaces [csi-mock-volumes-8834] to vanish �[38;5;243m01/20/23 17:14:47.527�[0m �[1mSTEP:�[0m uninstalling csi mock driver �[38;5;243m01/20/23 17:14:53.568�[0m Jan 20 17:14:53.568: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-attacher Jan 20 17:14:53.605: INFO: deleting *v1.ClusterRole: external-attacher-runner-csi-mock-volumes-8834 Jan 20 17:14:53.648: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-csi-mock-volumes-8834 Jan 20 17:14:53.683: INFO: deleting *v1.Role: csi-mock-volumes-8834-3228/external-attacher-cfg-csi-mock-volumes-8834 Jan 20 17:14:53.721: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8834-3228/csi-attacher-role-cfg Jan 20 17:14:53.755: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-provisioner Jan 20 17:14:53.788: INFO: deleting *v1.ClusterRole: external-provisioner-runner-csi-mock-volumes-8834 Jan 20 17:14:53.839: INFO: deleting *v1.ClusterRoleBinding: csi-provisioner-role-csi-mock-volumes-8834 Jan 20 17:14:53.880: INFO: deleting *v1.Role: csi-mock-volumes-8834-3228/external-provisioner-cfg-csi-mock-volumes-8834 Jan 20 17:14:53.914: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8834-3228/csi-provisioner-role-cfg Jan 20 17:14:53.959: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-resizer Jan 20 17:14:53.999: INFO: deleting *v1.ClusterRole: external-resizer-runner-csi-mock-volumes-8834 Jan 20 17:14:54.041: INFO: deleting *v1.ClusterRoleBinding: csi-resizer-role-csi-mock-volumes-8834 Jan 20 17:14:54.077: INFO: deleting *v1.Role: csi-mock-volumes-8834-3228/external-resizer-cfg-csi-mock-volumes-8834 Jan 20 17:14:54.116: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8834-3228/csi-resizer-role-cfg Jan 20 17:14:54.154: INFO: deleting *v1.ServiceAccount: 
csi-mock-volumes-8834-3228/csi-snapshotter Jan 20 17:14:54.193: INFO: deleting *v1.ClusterRole: external-snapshotter-runner-csi-mock-volumes-8834 Jan 20 17:14:54.228: INFO: deleting *v1.ClusterRoleBinding: csi-snapshotter-role-csi-mock-volumes-8834 Jan 20 17:14:54.271: INFO: deleting *v1.Role: csi-mock-volumes-8834-3228/external-snapshotter-leaderelection-csi-mock-volumes-8834 Jan 20 17:14:54.332: INFO: deleting *v1.RoleBinding: csi-mock-volumes-8834-3228/external-snapshotter-leaderelection Jan 20 17:14:54.381: INFO: deleting *v1.ServiceAccount: csi-mock-volumes-8834-3228/csi-mock Jan 20 17:14:54.417: INFO: deleting *v1.ClusterRoleBinding: csi-controller-attacher-role-csi-mock-volumes-8834 Jan 20 17:14:54.470: INFO: deleting *v1.ClusterRoleBinding: csi-controller-provisioner-role-csi-mock-volumes-8834 Jan 20 17:14:54.523: INFO: deleting *v1.ClusterRoleBinding: csi-controller-cluster-driver-registrar-role-csi-mock-volumes-8834 Jan 20 17:14:54.593: INFO: deleting *v1.ClusterRoleBinding: psp-csi-controller-driver-registrar-role-csi-mock-volumes-8834 Jan 20 17:14:54.669: INFO: deleting *v1.ClusterRoleBinding: csi-controller-resizer-role-csi-mock-volumes-8834 Jan 20 17:14:54.877: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8834 Jan 20 17:14:54.990: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8834 Jan 20 17:14:55.092: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8834-3228/csi-mockplugin Jan 20 17:14:55.166: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8834 �[1mSTEP:�[0m deleting the driver namespace: csi-mock-volumes-8834-3228 �[38;5;243m01/20/23 17:14:55.207�[0m �[1mSTEP:�[0m Waiting for namespaces [csi-mock-volumes-8834-3228] to vanish �[38;5;243m01/20/23 17:14:55.282�[0m Jan 20 17:15:48.563: INFO: error deleting namespace csi-mock-volumes-8834-3228: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=441, ErrCode=NO_ERROR, debug="" [DeferCleanup (Each)] [sig-storage] CSI mock volume test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] CSI mock volume dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] CSI mock volume tear down framework | framework.go:193 �[1mSTEP:�[0m Destroying namespace "csi-mock-volumes-8834-3228" for this suite. �[38;5;243m01/20/23 17:15:48.563�[0m �[1mSTEP:�[0m Collecting events from namespace "csi-mock-volumes-8834-3228". 
�[38;5;243m01/20/23 17:15:48.608�[0m Jan 20 17:15:48.652: INFO: Unexpected error: failed to list events in namespace "csi-mock-volumes-8834-3228": <*url.Error | 0xc0035291d0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/events", Err: <*net.OpError | 0xc0021fcd20>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002bca450>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0009c5920>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Jan 20 17:15:48.653: FAIL: failed to list events in namespace "csi-mock-volumes-8834-3228": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8834-3228/events": dial tcp 100.26.139.144:443: connect: connection refused Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0042f3da0, {0xc0033c5ba0, 0x1a}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc00293f520}, {0xc0033c5ba0, 0x1a}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0035ad460?, {0xc0033c5ba0?, 0x2?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:341 +0x82d k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000e74e10) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x662c060?, 0xc001383260?, 0x13?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc001383260?, 0x2946afc?}, {0xae7b420?, 0xc003bf5780?, 0xc003bf5770?}) /usr/local/go/src/reflect/value.go:368 +0xbc
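The gRPCCall entries above trace the usual CSI handshake and volume lifecycle: Identity/Probe and GetPluginInfo at driver registration, then Controller/CreateVolume, Node/NodeStageVolume and Node/NodePublishVolume on the way in, and the reverse calls on teardown. As a point of reference, a minimal Go sketch that issues the same Identity calls against a CSI socket might look like the following (the socket path and timeout are illustrative assumptions, not values from this run):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Hypothetical socket path; a real driver registers its socket under
        // /var/lib/kubelet/plugins/... on the node, as in the log above.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-mock/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial CSI socket: %v", err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        idc := csi.NewIdentityClient(conn)
        // Mirrors the /csi.v1.Identity/Probe call logged above.
        probe, err := idc.Probe(ctx, &csi.ProbeRequest{})
        if err != nil {
            log.Fatalf("Probe: %v", err)
        }
        // Mirrors /csi.v1.Identity/GetPluginInfo.
        info, err := idc.GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            log.Fatalf("GetPluginInfo: %v", err)
        }
        fmt.Printf("ready=%v driver=%s version=%s\n",
            probe.GetReady().GetValue(), info.GetName(), info.GetVendorVersion())
    }

Against the mock driver from this run, Probe would report ready=true and GetPluginInfo would echo the per-test driver name, matching the responses logged above.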
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sDynamic\sProvisioning\sInvalid\sAWS\sKMS\skey\sshould\sreport\san\serror\sand\screate\sno\sPV$'
test/e2e/storage/volume_provisioning.go:765
k8s.io/kubernetes/test/e2e/storage.glob..func34.5.1({0x7f83e0076468, 0xc004b00dc0})
    test/e2e/storage/volume_provisioning.go:765 +0x5b6
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.691: delete storage class: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-provisioning-4363-invalid-awsqkpzq": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/storage/testsuites/provisioning.go:558
----------
[FAILED] Jan 20 17:15:48.734: failed to list events in namespace "volume-provisioning-4363": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.776: Couldn't delete ns: "volume-provisioning-4363": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363", Err:(*net.OpError)(0xc0046b9c70)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [sig-storage] Dynamic Provisioning set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:14:38.762
Jan 20 17:14:38.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning 01/20/23 17:14:38.763
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:14:38.936
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:14:38.997
[BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-storage] Dynamic Provisioning test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV test/e2e/storage/volume_provisioning.go:705
STEP: creating a StorageClass 01/20/23 17:14:39.055
STEP: Creating a StorageClass 01/20/23 17:14:39.056
STEP: creating a claim object with a suffix for gluster dynamic provisioner 01/20/23 17:14:39.119
Jan 20 17:14:39.119: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 20 17:15:48.560: INFO: Unexpected error: Error waiting for PVC to fail provisioning: could not list PVC events in volume-provisioning-4363: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=625, ErrCode=NO_ERROR, debug="": <*errors.errorString | 0xc001606b00>: { s: "could not list PVC events in volume-provisioning-4363: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=625, ErrCode=NO_ERROR, debug=\"\"", }
Jan 20 17:15:48.560: FAIL: Error waiting for PVC to fail provisioning: could not list PVC events in volume-provisioning-4363: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=625, ErrCode=NO_ERROR, debug="": could not list PVC events in volume-provisioning-4363: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=625, ErrCode=NO_ERROR, debug=""
Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func34.5.1({0x7f83e0076468, 0xc004b00dc0})
    test/e2e/storage/volume_provisioning.go:765 +0x5b6
Jan 20 17:15:48.560: INFO: deleting claim "volume-provisioning-4363"/"pvc-grlcz"
Jan 20 17:15:48.611: FAIL: Error deleting claim "pvc-grlcz". Error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/persistentvolumeclaims/pvc-grlcz": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func34.5.1.1()
    test/e2e/storage/volume_provisioning.go:731 +0x24f
panic({0x70efe60, 0xc000bc5f80})
    /usr/local/go/src/runtime/panic.go:884 +0x212
k8s.io/kubernetes/test/e2e/framework.Fail({0xc0037dd180, 0x31c}, {0xc004df3c68?, 0xc001f14600?, 0xc004df3c90?})
    test/e2e/framework/log.go:61 +0x145
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc001606b00}, {0xc004ab88e0?, 0x0?, 0x0?})
    test/e2e/framework/expect.go:76 +0x267
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    test/e2e/framework/expect.go:43
k8s.io/kubernetes/test/e2e/storage.glob..func34.5.1({0x7f83e0076468, 0xc004b00dc0})
    test/e2e/storage/volume_provisioning.go:765 +0x5b6
[AfterEach] [sig-storage] Dynamic Provisioning test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] Invalid AWS KMS key test/e2e/storage/testsuites/provisioning.go:561
Jan 20 17:15:48.651: INFO: deleting storage class volume-provisioning-4363-invalid-awsqkpzq
Jan 20 17:15:48.691: INFO: Unexpected error: delete storage class: <*url.Error | 0xc003ff6660>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-provisioning-4363-invalid-awsqkpzq", Err: <*net.OpError | 0xc004948be0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003afe5a0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc004ab8a00>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.691: FAIL: delete storage class: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-provisioning-4363-invalid-awsqkpzq": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.SetupStorageClass.func1({0x7f83e0076468, 0xc000be37c0})
    test/e2e/storage/testsuites/provisioning.go:558 +0x1b0
reflect.Value.call({0x66df5a0?, 0xc00499ace0?, 0x75bb14b?}, {0x75bb752, 0x4}, {0xc002315f60, 0x1, 0x8022ee8?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x66df5a0?, 0xc00499ace0?, 0xc000132008?}, {0xc002315f60?, 0xc002315f40?, 0x3d22b45?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.691
STEP: Collecting events from namespace "volume-provisioning-4363". 01/20/23 17:15:48.691
Jan 20 17:15:48.734: INFO: Unexpected error: failed to list events in namespace "volume-provisioning-4363": <*url.Error | 0xc003afe6c0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events", Err: <*net.OpError | 0xc0046b9540>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0021248a0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00099d020>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.734: FAIL: failed to list events in namespace "volume-provisioning-4363": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc0048de5c0, {0xc004992870, 0x18})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc0039b49c0}, {0xc004992870, 0x18})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc0048de650?, {0xc004992870?, 0x7fac780?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc0018a23c0)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc001573280?, 0xc000e15fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc004282d88?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc001573280?, 0x2946afc?}, {0xae7b420?, 0xc000e15f80?, 0xc000aacba8?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [sig-storage] Dynamic Provisioning tear down framework | framework.go:193
STEP: Destroying namespace "volume-provisioning-4363" for this suite. 01/20/23 17:15:48.735
Jan 20 17:15:48.776: FAIL: Couldn't delete ns: "volume-provisioning-4363": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-provisioning-4363", Err:(*net.OpError)(0xc0046b9c70)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc0018a23c0)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc0015731c0?, 0xc001af8640?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc0015731c0?, 0x0?}, {0xae7b420?, 0x5?, 0xc001af8640?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
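For context, this test creates a StorageClass that points at a deliberately invalid AWS KMS key and then expects dynamic provisioning to fail with an event rather than a bound PV. A minimal client-go sketch of such a StorageClass follows; the provisioner and parameter names follow the in-tree AWS EBS plugin, and the key ARN is a placeholder, not the one the suite generated:

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createInvalidKMSClass is a sketch: it creates a StorageClass whose
    // kmsKeyId does not exist, so provisioning should fail and surface the
    // error as a PVC event instead of producing a PV.
    func createInvalidKMSClass(ctx context.Context, cs kubernetes.Interface) error {
        sc := &storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{GenerateName: "invalid-aws-kms-"},
            Provisioner: "kubernetes.io/aws-ebs",
            Parameters: map[string]string{
                "encrypted": "true",
                // Placeholder ARN; any nonexistent key exercises the failure path.
                "kmsKeyId": "arn:aws:kms:us-east-1:000000000000:key/00000000-0000-0000-0000-000000000000",
            },
        }
        _, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
        return err
    }

In this run the class was created, but the wait for the provisioning-failure event was cut short by the apiserver becoming unreachable, which is the connection-refused error above.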
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\shostPath\]\s\[Testpattern\:\sInline\-volume\s\(default\sfs\)\]\svolumes\sshould\sstore\sdata$'
test/e2e/framework/volume/fixtures.go:539
k8s.io/kubernetes/test/e2e/framework/volume.testVolumeClient(0xc000a8c2d0, {{0xc000695b50, 0xa}, {0x75c7bd6, 0x8}, {0x0, 0x0}, {0x0, 0x0, 0x0}, ...}, ...)
    test/e2e/framework/volume/fixtures.go:539 +0x148
k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...)
    test/e2e/framework/volume/fixtures.go:523
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
    test/e2e/storage/testsuites/volumes.go:187 +0x4ff
There were additional failures detected after the initial failure:
[FAILED] Jan 20 17:15:48.687: failed to list events in namespace "volume-461": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/events": dial tcp 100.26.139.144:443: connect: connection refused
In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44
----------
[FAILED] Jan 20 17:15:48.729: Couldn't delete ns: "volume-461": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461", Err:(*net.OpError)(0xc002f60000)})
In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370
from junit_01.xml
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:15:15.915
Jan 20 17:15:15.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume 01/20/23 17:15:15.916
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:15:16.014
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:15:16.077
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes test/e2e/framework/metrics/init/init.go:31
[It] should store data test/e2e/storage/testsuites/volumes.go:158
Jan 20 17:15:16.136: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jan 20 17:15:16.169: INFO: Creating resource for inline volume
STEP: starting hostpath-injector 01/20/23 17:15:16.169
Jan 20 17:15:16.208: INFO: Waiting up to 5m0s for pod "hostpath-injector" in namespace "volume-461" to be "running"
Jan 20 17:15:16.240: INFO: Pod "hostpath-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 32.071288ms
Jan 20 17:15:18.289: INFO: Pod "hostpath-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080777072s
Jan 20 17:15:20.272: INFO: Pod "hostpath-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06346421s
Jan 20 17:15:22.313: INFO: Pod "hostpath-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104301754s
Jan 20 17:15:24.272: INFO: Pod "hostpath-injector": Phase="Running", Reason="", readiness=true. Elapsed: 8.063210339s
Jan 20 17:15:24.272: INFO: Pod "hostpath-injector" satisfied condition "running"
STEP: Writing text file contents in the container. 01/20/23 17:15:24.272
Jan 20 17:15:24.272: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-461 exec hostpath-injector --namespace=volume-461 -- /bin/sh -c echo 'Hello from hostPath from namespace volume-461' > /opt/0/index.html; sync'
Jan 20 17:15:24.739: INFO: stderr: ""
Jan 20 17:15:24.739: INFO: stdout: ""
STEP: Checking that text file contents are perfect. 01/20/23 17:15:24.739
Jan 20 17:15:24.739: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-461 exec hostpath-injector --namespace=volume-461 -- cat /opt/0/index.html'
Jan 20 17:15:25.185: INFO: stderr: ""
Jan 20 17:15:25.185: INFO: stdout: "Hello from hostPath from namespace volume-461\n"
Jan 20 17:15:25.185: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-461 PodName:hostpath-injector ContainerName:hostpath-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 20 17:15:25.185: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 17:15:25.185: INFO: ExecWithOptions: Clientset creation
Jan 20 17:15:25.185: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/pods/hostpath-injector/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fopt%2F0&container=hostpath-injector&container=hostpath-injector&stderr=true&stdout=true)
Jan 20 17:15:25.443: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-461 PodName:hostpath-injector ContainerName:hostpath-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 20 17:15:25.443: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 17:15:25.444: INFO: ExecWithOptions: Clientset creation
Jan 20 17:15:25.444: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/pods/hostpath-injector/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fopt%2F0&container=hostpath-injector&container=hostpath-injector&stderr=true&stdout=true)
STEP: Deleting pod hostpath-injector in namespace volume-461 01/20/23 17:15:25.718
Jan 20 17:15:25.755: INFO: Waiting for pod hostpath-injector to disappear
Jan 20 17:15:25.785: INFO: Pod hostpath-injector still exists
Jan 20 17:15:27.786: INFO: Waiting for pod hostpath-injector to disappear
Jan 20 17:15:48.566: INFO: Encountered non-retryable error while listing pods: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/pods": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=295, ErrCode=NO_ERROR, debug=""
STEP: starting hostpath-client 01/20/23 17:15:48.566
Jan 20 17:15:48.607: FAIL: Failed to create client pod: Post "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/pods": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/volume.testVolumeClient(0xc000a8c2d0, {{0xc000695b50, 0xa}, {0x75c7bd6, 0x8}, {0x0, 0x0}, {0x0, 0x0, 0x0}, ...}, ...)
    test/e2e/framework/volume/fixtures.go:539 +0x148
k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...)
    test/e2e/framework/volume/fixtures.go:523
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
    test/e2e/storage/testsuites/volumes.go:187 +0x4ff
STEP: cleaning the environment after hostpath 01/20/23 17:15:48.607
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes test/e2e/framework/node/init/init.go:32
Jan 20 17:15:48.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [Testpattern: Inline-volume (default fs)] volumes test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [Testpattern: Inline-volume (default fs)] volumes dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.65
STEP: Collecting events from namespace "volume-461". 01/20/23 17:15:48.65
Jan 20 17:15:48.687: INFO: Unexpected error: failed to list events in namespace "volume-461": <*url.Error | 0xc003a58a80>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/events", Err: <*net.OpError | 0xc003475b80>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0034977d0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000f40fc0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.687: FAIL: failed to list events in namespace "volume-461": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc00473e5c0, {0xc000695b50, 0xa})
    test/e2e/framework/debug/dump.go:44 +0x191
k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc0018cf040}, {0xc000695b50, 0xa})
    test/e2e/framework/debug/dump.go:62 +0x8d
k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc00473e650?, {0xc000695b50?, 0x7fac780?})
    test/e2e/framework/debug/init/init.go:34 +0x32
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
    test/e2e/framework/framework.go:274 +0x6d
k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000a8c2d0)
    test/e2e/framework/framework.go:271 +0x179
reflect.Value.call({0x662c060?, 0xc001425b50?, 0xc003462fb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc002c7c8a8?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc001425b50?, 0x2946afc?}, {0xae7b420?, 0xc003462f80?, 0xc00347d400?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [Testpattern: Inline-volume (default fs)] volumes tear down framework | framework.go:193
STEP: Destroying namespace "volume-461" for this suite. 01/20/23 17:15:48.688
Jan 20 17:15:48.729: FAIL: Couldn't delete ns: "volume-461": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-461", Err:(*net.OpError)(0xc002f60000)})
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1()
    test/e2e/framework/framework.go:370 +0x4fe
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000a8c2d0)
    test/e2e/framework/framework.go:383 +0x1ca
reflect.Value.call({0x662c060?, 0xc001425ab0?, 0x750e8a0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x7fc3478?})
    /usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x662c060?, 0xc001425ab0?, 0x269f705?}, {0xae7b420?, 0xc0005e7f80?, 0xc0005e7f70?})
    /usr/local/go/src/reflect/value.go:368 +0xbc
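The ExecWithOptions entries above are the e2e framework driving commands through the pod exec subresource. A stripped-down sketch of the same pattern using client-go's remotecommand package follows; the pod name, namespace, and command are taken from the log, and everything else is illustrative:

    import (
        "bytes"
        "context"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        restclient "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    // readIndexHTML replays the verification step from the log: exec into
    // hostpath-injector and cat the file the test wrote.
    func readIndexHTML(ctx context.Context, cs kubernetes.Interface, cfg *restclient.Config) (string, error) {
        // Build the POST .../pods/hostpath-injector/exec request, as seen in
        // the execute(POST ...) lines above.
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("volume-461").
            Name("hostpath-injector").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "hostpath-injector",
                Command:   []string{"cat", "/opt/0/index.html"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
        if err != nil {
            return "", err
        }
        var stdout, stderr bytes.Buffer
        if err := exec.StreamWithContext(ctx, remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            return "", err
        }
        return stdout.String(), nil
    }

On a healthy cluster this would return the "Hello from hostPath from namespace volume-461" payload; in this run the follow-up client pod could not even be created because the apiserver had become unreachable.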
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(filesystem\svolmode\)\]\svolumeMode\sshould\snot\smount\s\/\smap\sunused\svolumes\sin\sa\spod\s\[LinuxOnly\]$'
test/e2e/storage/testsuites/volumemode.go:383 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7.1() test/e2e/storage/testsuites/volumemode.go:383 +0x45 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9e8 There were additional failures detected after the initial failure: [FAILED] Jan 20 17:15:48.740: failed to delete pod hostexec-i-03af3dbca738ba168-q6k7b in namespace volumemode-6638 Unexpected error: <*url.Error | 0xc001244570>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b", Err: <*net.OpError | 0xc00366ee10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00141e4b0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0015147e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b": dial tcp 100.26.139.144:443: connect: connection refused occurred In [DeferCleanup (Each)] at: test/e2e/framework/pod/delete.go:47 ---------- [FAILED] Jan 20 17:15:48.784: failed to list events in namespace "volumemode-6638": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/events": dial tcp 100.26.139.144:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 ---------- [FAILED] Jan 20 17:15:48.823: Couldn't delete ns: "volumemode-6638": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638", Err:(*net.OpError)(0xc002544690)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:370from junit_01.xml
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode set up framework | framework.go:178 �[1mSTEP:�[0m Creating a kubernetes client �[38;5;243m01/20/23 17:14:55.627�[0m Jan 20 17:14:55.627: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP:�[0m Building a namespace api object, basename volumemode �[38;5;243m01/20/23 17:14:55.628�[0m �[1mSTEP:�[0m Waiting for a default service account to be provisioned in namespace �[38;5;243m01/20/23 17:14:55.741�[0m �[1mSTEP:�[0m Waiting for kube-root-ca.crt to be provisioned in namespace �[38;5;243m01/20/23 17:14:55.801�[0m [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/framework/metrics/init/init.go:31 [It] should not mount / map unused volumes in a pod [LinuxOnly] test/e2e/storage/testsuites/volumemode.go:354 Jan 20 17:14:55.898: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics �[1mSTEP:�[0m Creating block device on node "i-03af3dbca738ba168" using path "/tmp/local-driver-3c0097db-63fa-4461-a347-153564e4e244" �[38;5;243m01/20/23 17:14:55.898�[0m Jan 20 17:14:55.942: INFO: Waiting up to 5m0s for pod "hostexec-i-03af3dbca738ba168-q6k7b" in namespace "volumemode-6638" to be "running" Jan 20 17:14:55.973: INFO: Pod "hostexec-i-03af3dbca738ba168-q6k7b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.749317ms Jan 20 17:14:58.012: INFO: Pod "hostexec-i-03af3dbca738ba168-q6k7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06922779s Jan 20 17:15:00.005: INFO: Pod "hostexec-i-03af3dbca738ba168-q6k7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06239758s Jan 20 17:15:02.005: INFO: Pod "hostexec-i-03af3dbca738ba168-q6k7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062648834s Jan 20 17:15:04.016: INFO: Pod "hostexec-i-03af3dbca738ba168-q6k7b": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.074060203s Jan 20 17:15:04.016: INFO: Pod "hostexec-i-03af3dbca738ba168-q6k7b" satisfied condition "running" Jan 20 17:15:04.016: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-3c0097db-63fa-4461-a347-153564e4e244 && dd if=/dev/zero of=/tmp/local-driver-3c0097db-63fa-4461-a347-153564e4e244/file bs=4096 count=5120 && losetup -f /tmp/local-driver-3c0097db-63fa-4461-a347-153564e4e244/file] Namespace:volumemode-6638 PodName:hostexec-i-03af3dbca738ba168-q6k7b ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:04.016: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:04.017: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:04.017: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-driver-3c0097db-63fa-4461-a347-153564e4e244+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-driver-3c0097db-63fa-4461-a347-153564e4e244%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-driver-3c0097db-63fa-4461-a347-153564e4e244%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:15:04.685: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-3c0097db-63fa-4461-a347-153564e4e244/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volumemode-6638 PodName:hostexec-i-03af3dbca738ba168-q6k7b ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:04.685: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:04.685: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:04.686: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-3c0097db-63fa-4461-a347-153564e4e244%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:15:05.293: INFO: Creating resource for pre-provisioned PV Jan 20 17:15:05.293: INFO: Creating PVC and PV �[1mSTEP:�[0m Creating a PVC followed by a PV �[38;5;243m01/20/23 17:15:05.293�[0m Jan 20 17:15:05.365: INFO: Waiting for PV local-htf4f to bind to PVC pvc-cg2gj Jan 20 17:15:05.365: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cg2gj] to have phase Bound Jan 20 17:15:05.399: INFO: PersistentVolumeClaim pvc-cg2gj found but phase is Pending instead of Bound. Jan 20 17:15:07.435: INFO: PersistentVolumeClaim pvc-cg2gj found but phase is Pending instead of Bound. Jan 20 17:15:09.498: INFO: PersistentVolumeClaim pvc-cg2gj found but phase is Pending instead of Bound. Jan 20 17:15:11.530: INFO: PersistentVolumeClaim pvc-cg2gj found but phase is Pending instead of Bound. 
Jan 20 17:15:13.563: INFO: PersistentVolumeClaim pvc-cg2gj found and phase=Bound (8.19742572s) Jan 20 17:15:13.563: INFO: Waiting up to 3m0s for PersistentVolume local-htf4f to have phase Bound Jan 20 17:15:13.594: INFO: PersistentVolume local-htf4f found and phase=Bound (31.319497ms) �[1mSTEP:�[0m Creating pod �[38;5;243m01/20/23 17:15:13.658�[0m Jan 20 17:15:13.692: INFO: Waiting up to 5m0s for pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1" in namespace "volumemode-6638" to be "running" Jan 20 17:15:13.723: INFO: Pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 31.057153ms Jan 20 17:15:15.756: INFO: Pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06396481s Jan 20 17:15:17.755: INFO: Pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063482396s Jan 20 17:15:19.756: INFO: Pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064492253s Jan 20 17:15:21.755: INFO: Pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1": Phase="Running", Reason="", readiness=true. Elapsed: 8.062807241s Jan 20 17:15:21.755: INFO: Pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1" satisfied condition "running" �[1mSTEP:�[0m Listing mounted volumes in the pod �[38;5;243m01/20/23 17:15:21.817�[0m Jan 20 17:15:21.849: INFO: Waiting up to 5m0s for pod "hostexec-i-03af3dbca738ba168-hr9nb" in namespace "volumemode-6638" to be "running" Jan 20 17:15:21.881: INFO: Pod "hostexec-i-03af3dbca738ba168-hr9nb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.287232ms Jan 20 17:15:23.915: INFO: Pod "hostexec-i-03af3dbca738ba168-hr9nb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06533141s Jan 20 17:15:25.929: INFO: Pod "hostexec-i-03af3dbca738ba168-hr9nb": Phase="Running", Reason="", readiness=true. Elapsed: 4.079740676s Jan 20 17:15:25.929: INFO: Pod "hostexec-i-03af3dbca738ba168-hr9nb" satisfied condition "running" Jan 20 17:15:25.929: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -d /var/lib/kubelet/pods/252fb1fe-6028-45f7-a2d9-0155c946aa3d/volumes] Namespace:volumemode-6638 PodName:hostexec-i-03af3dbca738ba168-hr9nb ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:25.929: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:25.930: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:25.930: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-hr9nb/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=test+%21+-d+%2Fvar%2Flib%2Fkubelet%2Fpods%2F252fb1fe-6028-45f7-a2d9-0155c946aa3d%2Fvolumes&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:15:26.345: INFO: exec i-03af3dbca738ba168: command: test ! 
-d /var/lib/kubelet/pods/252fb1fe-6028-45f7-a2d9-0155c946aa3d/volumes Jan 20 17:15:26.345: INFO: exec i-03af3dbca738ba168: stdout: "" Jan 20 17:15:26.345: INFO: exec i-03af3dbca738ba168: stderr: "" Jan 20 17:15:26.345: INFO: exec i-03af3dbca738ba168: exit code: 0 Jan 20 17:15:26.345: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/pods/252fb1fe-6028-45f7-a2d9-0155c946aa3d/volumes -mindepth 2 -maxdepth 2] Namespace:volumemode-6638 PodName:hostexec-i-03af3dbca738ba168-hr9nb ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:26.345: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:26.346: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:26.346: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-hr9nb/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=find+%2Fvar%2Flib%2Fkubelet%2Fpods%2F252fb1fe-6028-45f7-a2d9-0155c946aa3d%2Fvolumes+-mindepth+2+-maxdepth+2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:15:26.615: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -d /var/lib/kubelet/pods/252fb1fe-6028-45f7-a2d9-0155c946aa3d/volumeDevices] Namespace:volumemode-6638 PodName:hostexec-i-03af3dbca738ba168-hr9nb ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:15:26.615: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:15:26.615: INFO: ExecWithOptions: Clientset creation Jan 20 17:15:26.616: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-hr9nb/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=test+%21+-d+%2Fvar%2Flib%2Fkubelet%2Fpods%2F252fb1fe-6028-45f7-a2d9-0155c946aa3d%2FvolumeDevices&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
STEP: Checking that volume plugin kubernetes.io/local-volume is not used in pod directory 01/20/23 17:15:26.897
STEP: Deleting pod hostexec-i-03af3dbca738ba168-hr9nb in namespace volumemode-6638 01/20/23 17:15:26.897 Jan 20 17:15:26.936: INFO: Deleting pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1" in namespace "volumemode-6638" Jan 20 17:15:26.978: INFO: Wait up to 5m0s for pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1" to be fully deleted Jan 20 17:15:48.564: INFO: Encountered non-retryable error while getting pod volumemode-6638/pod-c55fff97-a9f6-496d-9536-818a403fd4a1: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/pod-c55fff97-a9f6-496d-9536-818a403fd4a1": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=273, ErrCode=NO_ERROR, debug="" Jan 20 17:15:48.564: INFO: Unexpected error: <*errors.errorString | 0xc000dc6820>: { s: "pod \"pod-c55fff97-a9f6-496d-9536-818a403fd4a1\" was not deleted: error while waiting for pod volumemode-6638/pod-c55fff97-a9f6-496d-9536-818a403fd4a1 not found: Get \"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/pod-c55fff97-a9f6-496d-9536-818a403fd4a1\": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=273, ErrCode=NO_ERROR, debug=\"\"", }
Jan 20 17:15:48.564: FAIL: pod "pod-c55fff97-a9f6-496d-9536-818a403fd4a1" was not deleted: error while waiting for pod volumemode-6638/pod-c55fff97-a9f6-496d-9536-818a403fd4a1 not found: Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/pod-c55fff97-a9f6-496d-9536-818a403fd4a1": dial tcp 100.26.139.144:443: connect: connection refused - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=273, ErrCode=NO_ERROR, debug=""
Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7.1() test/e2e/storage/testsuites/volumemode.go:383 +0x45 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9e8
STEP: Deleting pv and pvc 01/20/23 17:15:48.564 Jan 20 17:15:48.564: INFO: Deleting PersistentVolumeClaim "pvc-cg2gj" Jan 20 17:15:48.603: INFO: Deleting PersistentVolume "local-htf4f"
Jan 20 17:15:48.645: FAIL: Failed to delete PVC or PV: [failed to delete PVC "pvc-cg2gj": PVC Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/persistentvolumeclaims/pvc-cg2gj": dial tcp 100.26.139.144:443: connect: connection refused, failed to delete PV "local-htf4f": PV Delete API error: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-htf4f": dial tcp 100.26.139.144:443: connect: connection refused]
Full Stack Trace k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc00165b1f8) test/e2e/storage/framework/volume_resource.go:178 +0x1012 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumemode.go:187 +0x3e panic({0x70efe60, 0xc000a76af0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0008e6600, 0x1fe}, {0xc003cc3b18?, 0xc0008e6600?, 0xc003cc3b40?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa8f20, 0xc000dc6820}, {0x0?, 0x6206c66?, 0xc000de0770?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7.1() test/e2e/storage/testsuites/volumemode.go:383 +0x45 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9e8
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/framework/node/init/init.go:32 Jan 20 17:15:48.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/storage/drivers/in_tree.go:1734
STEP: Deleting pod hostexec-i-03af3dbca738ba168-q6k7b in namespace volumemode-6638 01/20/23 17:15:48.689 Jan 20 17:15:48.739: INFO: Unexpected error occurred: Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b": dial tcp 100.26.139.144:443: connect: connection refused
Jan 20 17:15:48.739: FAIL: failed to delete pod hostexec-i-03af3dbca738ba168-q6k7b in namespace volumemode-6638 Unexpected error: <*url.Error | 0xc001244570>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b", Err: <*net.OpError | 0xc00366ee10>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc00141e4b0>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0015147e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/pods/hostexec-i-03af3dbca738ba168-q6k7b": dial tcp 100.26.139.144:443: connect: connection refused occurred
Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.DeletePodOrFail({0x8022ee8, 0xc003686000}, {0xc0037925d0, 0xf}, {0xc00379ca20, 0x22}) test/e2e/framework/pod/delete.go:47 +0x270 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).Cleanup(0xc0006b3b70) test/e2e/storage/utils/host_exec.go:187 +0x97 reflect.Value.call({0x662c060?, 0xc000c408b8?, 0xc001e6b3b0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc000c408b8?, 0x2946afc?}, {0xae7b420?, 0xc003c87f80?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:15:48.74
STEP: Collecting events from namespace "volumemode-6638". 01/20/23 17:15:48.74 Jan 20 17:15:48.784: INFO: Unexpected error: failed to list events in namespace "volumemode-6638": <*url.Error | 0xc00141e6f0>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/events", Err: <*net.OpError | 0xc002544410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc001244e10>{ IP: [100, 26, 139, 144], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc000bc2580>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, }
Jan 20 17:15:48.784: FAIL: failed to list events in namespace "volumemode-6638": Get "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638/events": dial tcp 100.26.139.144:443: connect: connection refused
Full Stack Trace k8s.io/kubernetes/test/e2e/framework/debug.dumpEventsInNamespace(0xc003cc25c0, {0xc0034e8eb0, 0xf}) test/e2e/framework/debug/dump.go:44 +0x191 k8s.io/kubernetes/test/e2e/framework/debug.DumpAllNamespaceInfo({0x8022ee8, 0xc003686000}, {0xc0034e8eb0, 0xf}) test/e2e/framework/debug/dump.go:62 +0x8d k8s.io/kubernetes/test/e2e/framework/debug/init.init.0.func1.1(0xc003cc2650?, {0xc0034e8eb0?, 0x7fac780?}) test/e2e/framework/debug/init/init.go:34 +0x32 k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1() test/e2e/framework/framework.go:274 +0x6d k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc00165b2c0) test/e2e/framework/framework.go:271 +0x179 reflect.Value.call({0x662c060?, 0xc0010d3910?, 0xc00417cfb0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0xc0036870c8?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc0010d3910?, 0x2946afc?}, {0xae7b420?, 0xc00417cf80?, 0xc00343d920?}) /usr/local/go/src/reflect/value.go:368 +0xbc
[DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode tear down framework | framework.go:193
STEP: Destroying namespace "volumemode-6638" for this suite. 01/20/23 17:15:48.784 Jan 20 17:15:48.823: FAIL: Couldn't delete ns: "volumemode-6638": Delete "https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638": dial tcp 100.26.139.144:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-6638", Err:(*net.OpError)(0xc002544690)})
Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1() test/e2e/framework/framework.go:370 +0x4fe k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00165b2c0) test/e2e/framework/framework.go:383 +0x1ca reflect.Value.call({0x662c060?, 0xc0010d3840?, 0x0?}, {0x75bb752, 0x4}, {0xae7b420, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x662c060?, 0xc0010d3840?, 0x0?}, {0xae7b420?, 0x0?, 0x0?}) /usr/local/go/src/reflect/value.go:368 +0xbc
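Every failure in this block is the same symptom: the API endpoint (100.26.139.144:443) stopped accepting connections mid-test, so the pod-deletion wait and every DeferCleanup step died on "connection refused". For illustration only, this is not the framework's actual helper (which deliberately aborts on such errors as "non-retryable"): a deletion wait written with client-go can instead ride out a brief control-plane outage by treating transport errors as retryable, ending the poll only on NotFound or the overall timeout.

  package e2eutil

  import (
  	"context"
  	"time"

  	apierrors "k8s.io/apimachinery/pkg/api/errors"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/util/wait"
  	"k8s.io/client-go/kubernetes"
  )

  // waitForPodGone polls until the pod disappears. Transport-level failures
  // (e.g. "dial tcp ... connection refused" while kube-apiserver restarts)
  // are retried rather than treated as fatal; only a definitive NotFound,
  // or exhausting the timeout, ends the wait.
  func waitForPodGone(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
  	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
  		_, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
  		if err == nil {
  			return false, nil // pod still present, keep polling
  		}
  		if apierrors.IsNotFound(err) {
  			return true, nil // fully deleted
  		}
  		return false, nil // transient API error: retry until timeout
  	})
  }

The trade-off is visible in this run: the tolerant variant would have survived the apiserver restart, but it also masks genuine outages until the timeout expires, which is presumably why the framework fails fast instead.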
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumes\sshould\sstore\sdata$'
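This case drives an in-tree local volume backed by a loop device; the node-side preparation recorded in the log below reduces to a few shell steps (create a backing file, attach a loop device, mkfs ext4, mount). A standalone Go sketch of those same steps follows; the directory path and the error handling are illustrative, and it assumes root plus util-linux on the host.

  package main

  import (
  	"fmt"
  	"log"
  	"os/exec"
  )

  // Mirrors the hostexec commands the suite runs below: back a file with a
  // loop device, format it ext4, and mount it for use as a local PV.
  func main() {
  	dir := "/tmp/local-driver-example" // illustrative path
  	steps := []string{
  		fmt.Sprintf("mkdir -p %s && dd if=/dev/zero of=%s/file bs=4096 count=5120", dir, dir),
  		fmt.Sprintf("losetup -f %s/file", dir),
  		// losetup -j prints "/dev/loopN: ..." for the backing file.
  		fmt.Sprintf("DEV=$(losetup -j %s/file | cut -d: -f1) && mkfs -t ext4 $DEV && mount -t ext4 $DEV %s && chmod o+rwx %s", dir, dir, dir),
  	}
  	for _, s := range steps {
  		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
  			log.Fatalf("%q failed: %v\n%s", s, err, out)
  		}
  	}
  }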
test/e2e/framework/pod/pod_client.go:173 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).AddEphemeralContainerSync(0xc001973cc8, 0xc0027d7680, 0xc0039079c0, 0x0?) test/e2e/framework/pod/pod_client.go:173 +0x63c k8s.io/kubernetes/test/e2e/framework/volume.testVolumeClient(0xc0017d1c20, {{0xc000fa9570, 0xb}, {0x75bdc31, 0x5}, {0x0, 0x0}, {0x0, 0x0, 0x0}, ...}, ...) test/e2e/framework/volume/fixtures.go:554 +0x2d6 k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...) test/e2e/framework/volume/fixtures.go:523 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumes.go:187 +0x4ff from junit_01.xml
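The fatal frame is AddEphemeralContainerSync: the suite attaches an ephemeral container named volume-ephemeral-container to the local-client pod and waits for it to reach running, but as the log below shows the pod had already gone Phase="Failed", so the wait can never succeed and burns the full 5m timeout. In client-go terms the attach step is roughly the following sketch (not the framework's code; the image matches the one in the events below, while the command is illustrative):

  package e2eutil

  import (
  	"context"
  	"time"

  	v1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/client-go/kubernetes"
  )

  // addEphemeralContainer appends a debug-style ephemeral container to pod
  // and submits it via the dedicated ephemeralcontainers subresource. The
  // caller still has to poll pod.Status.EphemeralContainerStatuses until
  // the new container runs, which cannot happen once the pod is Failed.
  func addEphemeralContainer(c kubernetes.Interface, pod *v1.Pod) (*v1.Pod, error) {
  	ec := v1.EphemeralContainer{
  		EphemeralContainerCommon: v1.EphemeralContainerCommon{
  			Name:    "volume-ephemeral-container",
  			Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-4",
  			Command: []string{"sleep", "3600"}, // illustrative
  		},
  	}
  	pod.Spec.EphemeralContainers = append(pod.Spec.EphemeralContainers, ec)
  	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
  	defer cancel()
  	return c.CoreV1().Pods(pod.Namespace).UpdateEphemeralContainers(ctx, pod.Name, pod, metav1.UpdateOptions{})
  }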
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes set up framework | framework.go:178
STEP: Creating a kubernetes client 01/20/23 17:17:12.245 Jan 20 17:17:12.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume 01/20/23 17:17:12.246
STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:17:12.345
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:17:12.407
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/framework/metrics/init/init.go:31 [It] should store data test/e2e/storage/testsuites/volumes.go:158 Jan 20 17:17:12.504: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics
STEP: Creating block device on node "i-03af3dbca738ba168" using path "/tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15" 01/20/23 17:17:12.504 Jan 20 17:17:12.538: INFO: Waiting up to 5m0s for pod "hostexec-i-03af3dbca738ba168-48rhp" in namespace "volume-6475" to be "running" Jan 20 17:17:12.569: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Pending", Reason="", readiness=false. Elapsed: 30.774671ms Jan 20 17:17:14.601: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063180784s Jan 20 17:17:16.607: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068768014s Jan 20 17:17:18.601: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063092746s Jan 20 17:17:20.600: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062070383s Jan 20 17:17:22.600: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061896345s Jan 20 17:17:24.601: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.063140116s Jan 20 17:17:24.601: INFO: Pod "hostexec-i-03af3dbca738ba168-48rhp" satisfied condition "running" Jan 20 17:17:24.601: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15 && dd if=/dev/zero of=/tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15/file bs=4096 count=5120 && losetup -f /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15/file] Namespace:volume-6475 PodName:hostexec-i-03af3dbca738ba168-48rhp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:24.601: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:24.602: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:24.602: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/hostexec-i-03af3dbca738ba168-48rhp/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:24.900: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volume-6475 PodName:hostexec-i-03af3dbca738ba168-48rhp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:24.900: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:24.901: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:24.901: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/hostexec-i-03af3dbca738ba168-48rhp/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:25.168: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop1 && mount -t ext4 /dev/loop1 /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15 && chmod o+rwx /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15] Namespace:volume-6475 PodName:hostexec-i-03af3dbca738ba168-48rhp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:25.168: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:25.169: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:25.169: INFO: ExecWithOptions: execute(POST 
https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/hostexec-i-03af3dbca738ba168-48rhp/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkfs+-t+ext4+%2Fdev%2Floop1+%26%26+mount+-t+ext4+%2Fdev%2Floop1+%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15+%26%26+chmod+o%2Brwx+%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:25.514: INFO: Creating resource for pre-provisioned PV Jan 20 17:17:25.514: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV 01/20/23 17:17:25.514 Jan 20 17:17:25.580: INFO: Waiting for PV local-2tknc to bind to PVC pvc-k99sp Jan 20 17:17:25.580: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-k99sp] to have phase Bound Jan 20 17:17:25.611: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:27.643: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:29.676: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:31.708: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:33.740: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:35.772: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:37.804: INFO: PersistentVolumeClaim pvc-k99sp found but phase is Pending instead of Bound. Jan 20 17:17:39.835: INFO: PersistentVolumeClaim pvc-k99sp found and phase=Bound (14.255243818s) Jan 20 17:17:39.835: INFO: Waiting up to 3m0s for PersistentVolume local-2tknc to have phase Bound Jan 20 17:17:39.866: INFO: PersistentVolume local-2tknc found and phase=Bound (30.764267ms)
STEP: starting local-injector 01/20/23 17:17:39.931 Jan 20 17:17:39.963: INFO: Waiting up to 5m0s for pod "local-injector" in namespace "volume-6475" to be "running" Jan 20 17:17:39.994: INFO: Pod "local-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 31.186546ms Jan 20 17:17:42.026: INFO: Pod "local-injector": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063016188s Jan 20 17:17:44.026: INFO: Pod "local-injector": Phase="Running", Reason="", readiness=true. Elapsed: 4.063387492s Jan 20 17:17:44.026: INFO: Pod "local-injector" satisfied condition "running"
STEP: Writing text file contents in the container. 01/20/23 17:17:44.027 Jan 20 17:17:44.027: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-6475 exec local-injector --namespace=volume-6475 -- /bin/sh -c echo 'Hello from local from namespace volume-6475' > /opt/0/index.html; sync' Jan 20 17:17:44.612: INFO: stderr: "" Jan 20 17:17:44.612: INFO: stdout: ""
STEP: Checking that text file contents are perfect. 01/20/23 17:17:44.612 Jan 20 17:17:44.612: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-6475 exec local-injector --namespace=volume-6475 -- cat /opt/0/index.html' Jan 20 17:17:45.132: INFO: stderr: "" Jan 20 17:17:45.132: INFO: stdout: "Hello from local from namespace volume-6475\n" Jan 20 17:17:45.132: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-6475 PodName:local-injector ContainerName:local-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 20 17:17:45.132: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:45.133: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:45.133: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/local-injector/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fopt%2F0&container=local-injector&container=local-injector&stderr=true&stdout=true) Jan 20 17:17:45.419: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-6475 PodName:local-injector ContainerName:local-injector Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 20 17:17:45.419: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:45.420: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:45.420: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/local-injector/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fopt%2F0&container=local-injector&container=local-injector&stderr=true&stdout=true)
STEP: Deleting pod local-injector in namespace volume-6475 01/20/23 17:17:45.677 Jan 20 17:17:45.712: INFO: Waiting for pod local-injector to disappear Jan 20 17:17:45.744: INFO: Pod local-injector still exists Jan 20 17:17:47.745: INFO: Waiting for pod local-injector to disappear Jan 20 17:17:47.779: INFO: Pod local-injector no longer exists
STEP: starting local-client 01/20/23 17:17:47.779 Jan 20 17:17:47.819: INFO: Waiting up to 5m0s for pod "local-client" in namespace "volume-6475" to be "running" Jan 20 17:17:47.850: INFO: Pod "local-client": Phase="Pending", Reason="", readiness=false. Elapsed: 30.774424ms Jan 20 17:17:49.881: INFO: Pod "local-client": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062051296s Jan 20 17:17:51.882: INFO: Pod "local-client": Phase="Running", Reason="", readiness=true. Elapsed: 4.062459004s Jan 20 17:17:51.882: INFO: Pod "local-client" satisfied condition "running"
STEP: Checking that text file contents are perfect. 01/20/23 17:17:51.882 Jan 20 17:17:51.882: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/36067b0b-98e4-11ed-824d-f64c9135b4ea/kubectl --server=https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-6475 exec local-client --namespace=volume-6475 -- cat /opt/0/index.html' Jan 20 17:17:52.371: INFO: stderr: "" Jan 20 17:17:52.371: INFO: stdout: "Hello from local from namespace volume-6475\n" Jan 20 17:17:52.371: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:volume-6475 PodName:local-client ContainerName:local-client Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 20 17:17:52.371: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:52.373: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:52.373: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/local-client/exec?command=%2Fbin%2Fsh&command=-c&command=test+-d+%2Fopt%2F0&container=local-client&container=local-client&stderr=true&stdout=true) Jan 20 17:17:52.645: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:volume-6475 PodName:local-client ContainerName:local-client Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 20 17:17:52.645: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:52.646: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:52.646: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/local-client/exec?command=%2Fbin%2Fsh&command=-c&command=test+-b+%2Fopt%2F0&container=local-client&container=local-client&stderr=true&stdout=true)
STEP: Repeating the test on an ephemeral container (if enabled) 01/20/23 17:17:52.896 Jan 20 17:17:52.934: INFO: Waiting up to 5m0s for pod "local-client" in namespace "volume-6475" to be "container volume-ephemeral-container running" Jan 20 17:17:52.966: INFO: Pod "local-client": Phase="Running", Reason="", readiness=true. Elapsed: 31.977166ms Jan 20 17:17:54.998: INFO: Pod "local-client": Phase="Running", Reason="", readiness=true. Elapsed: 2.063552977s Jan 20 17:17:56.999: INFO: Pod "local-client": Phase="Running", Reason="", readiness=true. Elapsed: 4.064619218s Jan 20 17:17:58.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 6.06436344s Jan 20 17:18:00.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 8.063915619s Jan 20 17:18:02.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 10.064025341s Jan 20 17:18:04.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 12.064065941s Jan 20 17:18:06.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 14.063747475s Jan 20 17:18:08.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 16.064228565s Jan 20 17:18:10.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 18.064262289s Jan 20 17:18:12.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 20.063727533s Jan 20 17:18:14.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. 
Elapsed: 22.064011445s Jan 20 17:18:16.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 24.06402761s Jan 20 17:18:18.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 26.064702082s Jan 20 17:18:20.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 28.06418241s Jan 20 17:18:22.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 30.063535921s Jan 20 17:18:25.026: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 32.091728655s Jan 20 17:18:26.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 34.064044813s Jan 20 17:18:29.021: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 36.087297424s Jan 20 17:18:31.001: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 38.067207826s Jan 20 17:18:32.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 40.063525853s Jan 20 17:18:35.027: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 42.092493216s Jan 20 17:18:36.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 44.064525642s Jan 20 17:18:38.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 46.063517266s Jan 20 17:18:41.008: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 48.074284482s Jan 20 17:18:42.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 50.062827486s Jan 20 17:18:45.000: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 52.065497303s Jan 20 17:18:46.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 54.063667015s Jan 20 17:18:48.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 56.064001947s Jan 20 17:18:50.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 58.063215614s Jan 20 17:18:52.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m0.064147383s Jan 20 17:18:54.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m2.063425503s Jan 20 17:18:56.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m4.065375989s Jan 20 17:18:58.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m6.06390727s Jan 20 17:19:00.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m8.064176151s Jan 20 17:19:02.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m10.063007907s Jan 20 17:19:04.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m12.063417435s Jan 20 17:19:06.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m14.063576877s Jan 20 17:19:08.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m16.063615689s Jan 20 17:19:10.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. 
Elapsed: 1m18.064002341s Jan 20 17:19:12.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m20.063227781s Jan 20 17:19:14.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m22.064359481s Jan 20 17:19:16.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m24.064013867s Jan 20 17:19:18.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m26.06289326s Jan 20 17:19:20.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m28.063245607s Jan 20 17:19:22.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m30.062841467s Jan 20 17:19:24.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m32.063483667s Jan 20 17:19:26.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m34.063000579s Jan 20 17:19:28.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m36.064131113s Jan 20 17:19:30.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m38.063198236s Jan 20 17:19:32.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m40.06444242s Jan 20 17:19:34.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m42.063502579s Jan 20 17:19:36.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m44.063245288s Jan 20 17:19:38.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m46.063994237s Jan 20 17:19:41.001: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m48.066974131s Jan 20 17:19:42.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m50.063987845s Jan 20 17:19:44.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m52.063971787s Jan 20 17:19:46.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m54.063622687s Jan 20 17:19:48.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m56.063307756s Jan 20 17:19:50.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 1m58.064877083s Jan 20 17:19:52.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m0.063401005s Jan 20 17:19:54.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m2.063737358s Jan 20 17:19:56.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m4.064087487s Jan 20 17:19:58.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m6.063824134s Jan 20 17:20:00.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m8.063796195s Jan 20 17:20:02.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m10.063361388s Jan 20 17:20:04.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m12.063551381s Jan 20 17:20:06.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. 
Elapsed: 2m14.064009987s Jan 20 17:20:09.008: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m16.07399782s Jan 20 17:20:10.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m18.06351454s Jan 20 17:20:12.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m20.064270606s Jan 20 17:20:14.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m22.063934963s Jan 20 17:20:16.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m24.063528988s Jan 20 17:20:18.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m26.065226384s Jan 20 17:20:20.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m28.063204063s Jan 20 17:20:22.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m30.063234851s Jan 20 17:20:24.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m32.063736927s Jan 20 17:20:27.003: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m34.068518008s Jan 20 17:20:28.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m36.064439641s Jan 20 17:20:30.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m38.063682757s Jan 20 17:20:32.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m40.064094647s Jan 20 17:20:34.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m42.0639386s Jan 20 17:20:36.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m44.063452368s Jan 20 17:20:38.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m46.063813662s Jan 20 17:20:40.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m48.063813714s Jan 20 17:20:43.025: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m50.090544574s Jan 20 17:20:45.010: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m52.075523898s Jan 20 17:20:47.004: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m54.0702083s Jan 20 17:20:48.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m56.065372655s Jan 20 17:20:50.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 2m58.064707219s Jan 20 17:20:52.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m0.063476183s Jan 20 17:20:54.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m2.064065173s Jan 20 17:20:56.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m4.064198654s Jan 20 17:20:59.002: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m6.06776014s Jan 20 17:21:01.002: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m8.067743912s Jan 20 17:21:02.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. 
Elapsed: 3m10.064051751s Jan 20 17:21:05.000: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m12.066320365s Jan 20 17:21:06.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m14.06335207s Jan 20 17:21:08.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m16.064481175s Jan 20 17:21:10.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m18.06322761s Jan 20 17:21:12.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m20.063927357s Jan 20 17:21:14.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m22.063165836s Jan 20 17:21:16.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m24.064240877s Jan 20 17:21:18.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m26.064092754s Jan 20 17:21:21.000: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m28.066377618s Jan 20 17:21:22.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m30.064287378s Jan 20 17:21:24.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m32.063608161s Jan 20 17:21:26.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m34.063993775s Jan 20 17:21:28.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m36.06463795s Jan 20 17:21:30.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m38.06393948s Jan 20 17:21:32.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m40.064810785s Jan 20 17:21:34.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m42.063595508s Jan 20 17:21:36.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m44.063631903s Jan 20 17:21:38.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m46.065350645s Jan 20 17:21:40.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m48.064447755s Jan 20 17:21:42.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m50.063786587s Jan 20 17:21:44.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m52.064163691s Jan 20 17:21:46.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m54.064162135s Jan 20 17:21:48.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m56.063515536s Jan 20 17:21:51.001: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 3m58.066916331s Jan 20 17:21:52.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m0.065288624s Jan 20 17:21:55.008: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m2.07371296s Jan 20 17:21:57.002: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m4.068231523s Jan 20 17:21:58.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. 
Elapsed: 4m6.063359627s Jan 20 17:22:00.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m8.064159279s Jan 20 17:22:02.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m10.063791037s Jan 20 17:22:04.997: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m12.063055619s Jan 20 17:22:06.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m14.06420954s Jan 20 17:22:08.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m16.063709091s Jan 20 17:22:10.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m18.063974149s Jan 20 17:22:12.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m20.064156996s Jan 20 17:22:14.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m22.063621637s Jan 20 17:22:16.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m24.063564515s Jan 20 17:22:18.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m26.064379462s Jan 20 17:22:21.001: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m28.067413107s Jan 20 17:22:22.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m30.064110795s Jan 20 17:22:24.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m32.063629039s Jan 20 17:22:26.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m34.064290304s Jan 20 17:22:28.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m36.063998959s Jan 20 17:22:30.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m38.063445534s Jan 20 17:22:32.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m40.06368734s Jan 20 17:22:34.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m42.063481594s Jan 20 17:22:37.002: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m44.068268907s Jan 20 17:22:39.000: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m46.066379008s Jan 20 17:22:40.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m48.065058419s Jan 20 17:22:42.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m50.063893719s Jan 20 17:22:44.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m52.064075965s Jan 20 17:22:46.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m54.064057941s Jan 20 17:22:48.999: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m56.064557535s Jan 20 17:22:50.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 4m58.064084998s Jan 20 17:22:52.998: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. Elapsed: 5m0.063975281s Jan 20 17:22:53.029: INFO: Pod "local-client": Phase="Failed", Reason="Terminated", readiness=false. 
Elapsed: 5m0.095153157s Jan 20 17:22:53.031: INFO: Unexpected error: <*pod.timeoutError | 0xc004bd7440>: { msg: "timed out while waiting for pod volume-6475/local-client to be container volume-ephemeral-container running", observedObjects: [ <*v1.Pod | 0xc000b0bb00>{ TypeMeta: {Kind: "", APIVersion: ""}, ObjectMeta: { Name: "local-client", GenerateName: "", Namespace: "volume-6475", SelfLink: "", UID: "a8520c4b-0769-4650-a6c0-b7ecaf960485", ResourceVersion: "8832", Generation: 0, CreationTimestamp: { Time: { wall: 0, ext: 63809831867, loc: { name: "Local", zone: [ {name: "UTC", offset: 0, isDST: false}, ], tx: [ { when: -576460752303423488, index: 0, isstd: false, isutc: false, }, ], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: "UTC", offset: 0, isDST: false}, }, }, }, DeletionTimestamp: nil, DeletionGracePeriodSeconds: nil, Labels: { "role": "local-client", }, Annotations: nil, OwnerReferences: nil, Finalizers: nil, ManagedFields: [ { Manager: "e2e.test", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63809831867, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:role\":{}}},\"f:spec\":{\"f:affinity\":{\".\":{},\"f:nodeAffinity\":{\".\":{},\"f:requiredDuringSchedulingIgnoredDuringExecution\":{}}},\"f:containers\":{\"k:{\\\"name\\\":\\\"local-client\\\"}\":{\".\":{},\"f:command\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:securityContext\":{\".\":{},\"f:privileged\":{}},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{},\"f:volumeMounts\":{\".\":{},\"k:{\\\"mountPath\\\":\\\"/opt/0\\\"}\":{\".\":{},\"f:mountPath\":{},\"f:name\":{}}},\"f:workingDir\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{\".\":{},\"f:seLinuxOptions\":{\".\":{},\"f:level\":{}}},\"f:terminationGracePeriodSeconds\":{},\"f:volumes\":{\".\":{},\"k:{\\\"name\\\":\\\"local-volume-0\\\"}\":{\".\":{},\"f:name\":{},\"f:persistentVolumeClaim\":{\".\":{},\"f:claimName\":{}}}}}}", }, Subresource: "", }, { Manager: "kubelet", Operation: "Update", APIVersion: "v1", Time: { Time: { wall: 0, ext: 63809831878, loc: { name: "Local", zone: [...], tx: [...], extend: "UTC0", cacheStart: 9223372036854775807, cacheEnd: 9223372036854775807, cacheZone: {name: ..., offset: ..., isDST: ...}, }, }, }, FieldsType: "FieldsV1", FieldsV1: { Raw: "{\"f:status\":{\"f:conditions\":{\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"DisruptionTarget\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Initialized\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:status\":{},\"f:type\":{}},\"k:{\\\"type\\\":\\\"Ready\\\"}\":{\".\":{},\"f:lastProbeTime\":{},\"f:lastTransitionTime\":{},\"f:reason\":{},\"f:status\":{},\"f:type\":{}}},\"f:containerStatuses\":{},\"f:hostIP\":{},\"f:message\":{},\"f:phase\":{},\"f:podIP\":{},\"f:podIPs\":{\".\":{},\"k:{\\\"ip\\\":\\\"100.96.2.60\\\"}\":{\".\":{},\"f:ip\":{}}},\"f:reason\":{},\"f:startTime\":{}}}", }, Subresource: "status", }, ], }, Spec: { Volumes: [ { 
Name: "local-volume-0", VolumeSource: { HostPath: nil, EmptyDir: nil, GCEPersistentDisk: nil, AWSElasticBlockStore: nil, GitRepo: nil, Secret: nil, NFS: nil, ISCSI: nil, Glusterfs: nil, PersistentVolumeClaim: { ClaimName: "pvc-k99sp", ReadOnly: false, }, RBD: nil, FlexVolume: nil, Cinder: nil, CephFS: nil, Flocker: nil, DownwardAPI: nil, FC: nil, AzureFile: nil, ConfigMap: nil, VsphereVolume: nil, Quobyte: nil, AzureDisk: nil, PhotonPersistentDisk: nil, Projected: nil, PortworxVolume: nil, ScaleIO: nil, StorageOS: nil, CSI: nil, Ephemeral: nil, }, ... Gomega truncated this representation as it exceeds 'format.MaxLength'. Consider having the object provide a custom 'GomegaStringer' representation or adjust the parameters in Gomega's 'format' package. Learn more here: https://onsi.github.io/gomega/#adjusting-output
Jan 20 17:22:53.031: FAIL: timed out while waiting for pod volume-6475/local-client to be container volume-ephemeral-container running
Full Stack Trace k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).AddEphemeralContainerSync(0xc001973cc8, 0xc0027d7680, 0xc0039079c0, 0x0?) test/e2e/framework/pod/pod_client.go:173 +0x63c k8s.io/kubernetes/test/e2e/framework/volume.testVolumeClient(0xc0017d1c20, {{0xc000fa9570, 0xb}, {0x75bdc31, 0x5}, {0x0, 0x0}, {0x0, 0x0, 0x0}, ...}, ...) test/e2e/framework/volume/fixtures.go:554 +0x2d6 k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...) test/e2e/framework/volume/fixtures.go:523 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumes.go:187 +0x4ff
STEP: Deleting pod local-client in namespace volume-6475 01/20/23 17:22:53.031 Jan 20 17:22:53.069: INFO: Waiting for pod local-client to disappear Jan 20 17:22:53.101: INFO: Pod local-client no longer exists
STEP: cleaning the environment after local 01/20/23 17:22:53.101
STEP: Deleting pv and pvc 01/20/23 17:22:53.101 Jan 20 17:22:53.101: INFO: Deleting PersistentVolumeClaim "pvc-k99sp" Jan 20 17:22:53.135: INFO: Deleting PersistentVolume "local-2tknc" Jan 20 17:22:53.171: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15] Namespace:volume-6475 PodName:hostexec-i-03af3dbca738ba168-48rhp ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:22:53.171: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:22:53.172: INFO: ExecWithOptions: Clientset creation Jan 20 17:22:53.172: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volume-6475/pods/hostexec-i-03af3dbca738ba168-48rhp/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%2Ftmp%2Flocal-driver-f8449929-1601-4c82-8fb6-bb4689577a15&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:22:53.297: INFO: exec i-03af3dbca738ba168: command: umount /tmp/local-driver-f8449929-1601-4c82-8fb6-bb4689577a15 Jan 20 17:22:53.297: INFO: exec i-03af3dbca738ba168: stdout: "" Jan 20 17:22:53.297: INFO: exec i-03af3dbca738ba168: stderr: "" Jan 20 17:22:53.297: INFO: exec i-03af3dbca738ba168: exit code: 0 Jan 20 17:22:53.297: INFO: Unexpected error: <*errors.errorString | 0xc00192d6c0>: { s: "unable to upgrade connection: container not found (\"agnhost-container\")", }
Jan 20 17:22:53.297: FAIL: unable to upgrade connection: container not found ("agnhost-container")
Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlockFS(0xc00346a960, 0xc004ac08c0) test/e2e/storage/utils/local.go:192 +0xb0 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0xc0010c3f68?, 0x7553220?) test/e2e/storage/utils/local.go:353 +0xc5 k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x13?) test/e2e/storage/drivers/in_tree.go:1760 +0x28 k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x8022ee8?) test/e2e/storage/utils/utils.go:748 +0x6d k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc003cf6900) test/e2e/storage/framework/volume_resource.go:236 +0xc89 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func2() test/e2e/storage/testsuites/volumes.go:150 +0x4a k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3.1() test/e2e/storage/testsuites/volumes.go:162 +0x165 panic({0x70efe60, 0xc0006a7ab0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework.Fail({0xc000146cb0, 0x6b}, {0xc002fcd508?, 0xc000146cb0?, 0xc002fcd530?}) test/e2e/framework/log.go:61 +0x145 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x7fa9200, 0xc004bd7440}, {0x0?, 0xc004b90be0?, 0xc?}) test/e2e/framework/expect.go:76 +0x267 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) test/e2e/framework/expect.go:43 k8s.io/kubernetes/test/e2e/framework/pod.(*PodClient).AddEphemeralContainerSync(0xc001973cc8, 0xc0027d7680, 0xc0039079c0, 0x0?) test/e2e/framework/pod/pod_client.go:173 +0x63c k8s.io/kubernetes/test/e2e/framework/volume.testVolumeClient(0xc0017d1c20, {{0xc000fa9570, 0xb}, {0x75bdc31, 0x5}, {0x0, 0x0}, {0x0, 0x0, 0x0}, ...}, ...) test/e2e/framework/volume/fixtures.go:554 +0x2d6 k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...) test/e2e/framework/volume/fixtures.go:523 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumes.go:187 +0x4ff
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/framework/node/init/init.go:32 Jan 20 17:22:53.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/storage/drivers/in_tree.go:1734
STEP: Deleting pod hostexec-i-03af3dbca738ba168-48rhp in namespace volume-6475 01/20/23 17:22:53.33
[DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (default fs)] volumes test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (default fs)] volumes dump namespaces | framework.go:196
STEP: dump namespace information after failure 01/20/23 17:22:53.37
STEP: Collecting events from namespace "volume-6475". 01/20/23 17:22:53.37
STEP: Found 18 events. 01/20/23 17:22:53.401 Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:12 +0000 UTC - event for hostexec-i-03af3dbca738ba168-48rhp: {default-scheduler } Scheduled: Successfully assigned volume-6475/hostexec-i-03af3dbca738ba168-48rhp to i-03af3dbca738ba168 Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:16 +0000 UTC - event for hostexec-i-03af3dbca738ba168-48rhp: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:16 +0000 UTC - event for hostexec-i-03af3dbca738ba168-48rhp: {kubelet i-03af3dbca738ba168} Created: Created container agnhost-container Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:16 +0000 UTC - event for hostexec-i-03af3dbca738ba168-48rhp: {kubelet i-03af3dbca738ba168} Started: Started container agnhost-container Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:25 +0000 UTC - event for pvc-k99sp: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-6475" not found Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:39 +0000 UTC - event for local-injector: {default-scheduler } Scheduled: Successfully assigned volume-6475/local-injector to i-03af3dbca738ba168 Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:40 +0000 UTC - event for local-injector: {kubelet i-03af3dbca738ba168} Started: Started container local-injector Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:40 +0000 UTC - event for local-injector: {kubelet i-03af3dbca738ba168} Created: Created container local-injector Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:40 +0000 UTC - event for local-injector: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:45 +0000 UTC - event for local-injector: {kubelet i-03af3dbca738ba168} Killing: Stopping container local-injector Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:47 +0000 UTC - event for local-client: {default-scheduler } Scheduled: Successfully assigned volume-6475/local-client to i-03af3dbca738ba168 Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:49 +0000 UTC - event for local-client: {kubelet i-03af3dbca738ba168} Created: Created container local-client Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:49 +0000 UTC - event for local-client: {kubelet i-03af3dbca738ba168} Started: Started container local-client Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:49 +0000 UTC - event for local-client: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:50 +0000 UTC - event for hostexec-i-03af3dbca738ba168-48rhp: {kubelet i-03af3dbca738ba168} Killing: Stopping container agnhost-container Jan 20 17:22:53.401: INFO: At 2023-01-20 17:17:52 +0000 UTC - event for local-client: {kubelet i-03af3dbca738ba168} Killing: Stopping container local-client Jan 20 17:22:53.401: INFO: At 2023-01-20 17:18:23 +0000 UTC - event for hostexec-i-03af3dbca738ba168-48rhp: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod volume-6475/hostexec-i-03af3dbca738ba168-48rhp Jan 20 17:22:53.401: INFO: At 2023-01-20 17:18:23 +0000 UTC - event for local-client: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod volume-6475/local-client Jan 20 17:22:53.434: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 17:22:53.435: INFO: Jan 20 17:22:53.467: INFO: Logging node info for node
i-02cae73514916eb60 Jan 20 17:22:53.498: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 17317 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"e6:28:1d:38:9c:ba"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {flanneld Update v1 2023-01-20 17:16:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:21:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:16:23 +0000 UTC,LastTransitionTime:2023-01-20 17:16:23 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:21:21 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:21:21 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:21:21 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:21:21 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 
registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 20 17:22:53.498: INFO: Logging kubelet events for node i-02cae73514916eb60 Jan 20 17:22:53.533: INFO: Logging pods the kubelet thinks is on node i-02cae73514916eb60 Jan 20 17:22:53.582: INFO: etcd-manager-main-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:53.582: INFO: Container etcd-manager ready: true, restart count 1 Jan 20 17:22:53.582: INFO: kube-apiserver-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+2 container statuses recorded) Jan 20 17:22:53.582: INFO: Container healthcheck ready: true, restart count 1 Jan 20 17:22:53.582: INFO: 
Container kube-apiserver ready: true, restart count 2
Jan 20 17:22:53.582: INFO: kube-controller-manager-i-02cae73514916eb60 started at 2023-01-20 17:06:00 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container kube-controller-manager ready: true, restart count 4
Jan 20 17:22:53.582: INFO: kube-proxy-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:22:53.582: INFO: dns-controller-74d4646d88-p7zxr started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container dns-controller ready: true, restart count 1
Jan 20 17:22:53.582: INFO: ebs-csi-controller-c9fc69cf5-kn566 started at 2023-01-20 17:07:01 +0000 UTC (0+5 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container csi-attacher ready: true, restart count 2
Jan 20 17:22:53.582: INFO: Container csi-provisioner ready: true, restart count 2
Jan 20 17:22:53.582: INFO: Container csi-resizer ready: true, restart count 1
Jan 20 17:22:53.582: INFO: Container ebs-plugin ready: true, restart count 1
Jan 20 17:22:53.582: INFO: Container liveness-probe ready: true, restart count 1
Jan 20 17:22:53.582: INFO: aws-cloud-controller-manager-2qgs4 started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container aws-cloud-controller-manager ready: true, restart count 2
Jan 20 17:22:53.582: INFO: etcd-manager-events-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container etcd-manager ready: true, restart count 1
Jan 20 17:22:53.582: INFO: kube-scheduler-i-02cae73514916eb60 started at 2023-01-20 17:16:08 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container kube-scheduler ready: true, restart count 1
Jan 20 17:22:53.582: INFO: ebs-csi-node-lfls8 started at 2023-01-20 17:06:58 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container ebs-plugin ready: true, restart count 1
Jan 20 17:22:53.582: INFO: Container liveness-probe ready: true, restart count 1
Jan 20 17:22:53.582: INFO: Container node-driver-registrar ready: true, restart count 1
Jan 20 17:22:53.582: INFO: kube-flannel-ds-5nkqq started at 2023-01-20 17:06:58 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Init container install-cni-plugin ready: true, restart count 1
Jan 20 17:22:53.582: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:22:53.582: INFO: Container kube-flannel ready: true, restart count 1
Jan 20 17:22:53.582: INFO: kops-controller-mqtlq started at 2023-01-20 17:07:01 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.582: INFO: Container kops-controller ready: true, restart count 2
Jan 20 17:22:53.737: INFO: Latency metrics for node i-02cae73514916eb60
Jan 20 17:22:53.737: INFO: Logging node info for node i-03af3dbca738ba168
Jan 20 17:22:53.768: INFO: Node Info: &Node{ObjectMeta:{i-03af3dbca738ba168 f2b83166-36e9-4e14-8fe3-7e4da5f5a758 21238 0 2023-01-20 17:07:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-03af3dbca738ba168 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium
topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-03af3dbca738ba168 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5429":"i-03af3dbca738ba168","csi-hostpath-ephemeral-965":"i-03af3dbca738ba168","csi-hostpath-provisioning-6348":"i-03af3dbca738ba168","csi-hostpath-volume-4046":"i-03af3dbca738ba168","csi-mock-csi-mock-volumes-9484":"csi-mock-csi-mock-volumes-9484","ebs.csi.aws.com":"i-03af3dbca738ba168"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"ea:9a:cb:28:29:d0"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.58.114 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:22:45 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:22:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-03af3dbca738ba168,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:26 +0000 UTC,LastTransitionTime:2023-01-20 17:18:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:46 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:46 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:46 +0000 UTC,LastTransitionTime:2023-01-20 17:07:42 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:22:46 +0000 UTC,LastTransitionTime:2023-01-20 17:18:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.58.114,},NodeAddress{Type:ExternalIP,Address:54.92.220.56,},NodeAddress{Type:InternalDNS,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:Hostname,Address:i-03af3dbca738ba168.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-92-220-56.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a474c9a9b98f9bdaf7a97ffdf305e,SystemUUID:ec2a474c-9a9b-98f9-bdaf-7a97ffdf305e,BootID:67cb1ab9-8c0f-4a0e-aa27-d7cde3225458,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e 
registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-965^06499cbd-98e7-11ed-b892-ba5dd5947ff8 kubernetes.io/csi/csi-hostpath-ephemeral-965^f86b7722-98e6-11ed-b892-ba5dd5947ff8 kubernetes.io/csi/csi-hostpath-provisioning-6348^028e2af2-98e7-11ed-b885-86753d61eb34 kubernetes.io/csi/csi-hostpath-volume-4046^f0dee571-98e6-11ed-9aac-768e024aedca kubernetes.io/csi/ebs.csi.aws.com^vol-0b8c15c8fbbfc1b17 kubernetes.io/csi/ebs.csi.aws.com^vol-0f575339b789741c0],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0b8c15c8fbbfc1b17,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-6348^028e2af2-98e7-11ed-b885-86753d61eb34,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-965^06499cbd-98e7-11ed-b892-ba5dd5947ff8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-4046^f0dee571-98e6-11ed-9aac-768e024aedca,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-965^f86b7722-98e6-11ed-b892-ba5dd5947ff8,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f575339b789741c0,DevicePath:,},},Config:nil,},} Jan 20 17:22:53.769: INFO: Logging kubelet events for node i-03af3dbca738ba168 Jan 20 17:22:53.804: INFO: Logging pods the kubelet thinks is on node i-03af3dbca738ba168 Jan 20 17:22:53.876: INFO: kube-proxy-i-03af3dbca738ba168 started at 2023-01-20 17:07:42 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:53.876: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:22:53.876: INFO: inline-volume-tester2-jnxlt started at 2023-01-20 17:22:33 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:53.876: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 20 17:22:53.876: INFO: webhook-to-be-mutated started at 2023-01-20 17:21:34 +0000 UTC (1+1 container statuses recorded) Jan 20 17:22:53.876: INFO: Init container webhook-added-init-container ready: false, restart count 0 Jan 20 17:22:53.876: INFO: Container example ready: false, restart count 0 Jan 20 17:22:53.876: INFO: coredns-559769c974-6f8t8 started at 
2023-01-20 17:08:35 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container coredns ready: true, restart count 1
Jan 20 17:22:53.876: INFO: inline-volume-tester-2n6fx started at 2023-01-20 17:21:43 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 20 17:22:53.876: INFO: inline-volume-tester-gxqnv started at 2023-01-20 17:22:17 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 20 17:22:53.876: INFO: hostexec-i-03af3dbca738ba168-6dvmx started at 2023-01-20 17:22:37 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:53.876: INFO: pod-c4e3a5d1-0ec1-4196-96f7-62c0fc6405ea started at 2023-01-20 17:22:39 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container write-pod ready: true, restart count 0
Jan 20 17:22:53.876: INFO: boom-server started at 2023-01-20 17:14:35 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container boom-server ready: false, restart count 0
Jan 20 17:22:53.876: INFO: netserver-0 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container webserver ready: false, restart count 0
Jan 20 17:22:53.876: INFO: hostexec-i-03af3dbca738ba168-q6k7b started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container agnhost-container ready: false, restart count 0
Jan 20 17:22:53.876: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:21:51 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:22:53.876: INFO: hostpath-injector started at 2023-01-20 17:22:28 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container hostpath-injector ready: true, restart count 0
Jan 20 17:22:53.876: INFO: netserver-0 started at 2023-01-20 17:22:46 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container webserver ready: false, restart count 0
Jan 20 17:22:53.876: INFO: service-proxy-disabled-x6wst started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container service-proxy-disabled ready: false, restart count 0
Jan 20 17:22:53.876: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:22:21 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:22:53.876: INFO: hostpath-client started at 2023-01-20 17:22:33 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container hostpath-client ready: true, restart count 0
Jan 20 17:22:53.876: INFO: hostexec-i-03af3dbca738ba168-7lbjg started at 2023-01-20 17:22:35 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:53.876: INFO: kube-flannel-ds-6vmgt started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Init container install-cni-plugin ready: true, restart count 1
Jan 20 17:22:53.876: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container kube-flannel ready: true, restart count 2
Jan 20 17:22:53.876: INFO: csi-mockplugin-0 started at 2023-01-20 17:22:18 +0000 UTC (0+4 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container busybox ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container driver-registrar ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container mock ready: true, restart count 0
Jan 20 17:22:53.876: INFO: pvc-volume-tester-jxz2w started at 2023-01-20 17:22:47 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container volume-tester ready: false, restart count 0
Jan 20 17:22:53.876: INFO: hostexec-i-03af3dbca738ba168-czqgs started at 2023-01-20 17:22:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:53.876: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:22:04 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:22:53.876: INFO: inline-volume-tester-wp62q started at 2023-01-20 17:22:04 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 20 17:22:53.876: INFO: hostexec-i-03af3dbca738ba168-nxv9p started at <nil> (0+0 container statuses recorded)
Jan 20 17:22:53.876: INFO: service-proxy-toggled-zghmz started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container service-proxy-toggled ready: true, restart count 1
Jan 20 17:22:53.876: INFO: ebs-csi-node-wmgfk started at 2023-01-20 17:18:21 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container ebs-plugin ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:22:53.876: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:21:52 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:22:53.876: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:22:53.876: INFO: Container
csi-provisioner ready: true, restart count 0 Jan 20 17:22:53.876: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:22:53.876: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:22:53.876: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:22:53.876: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:22:53.876: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:22:54.288: INFO: Latency metrics for node i-03af3dbca738ba168 Jan 20 17:22:54.288: INFO: Logging node info for node i-0460dbd3e490039bb Jan 20 17:22:54.320: INFO: Node Info: &Node{ObjectMeta:{i-0460dbd3e490039bb 3ed25acd-2f33-4687-a606-3d5a944590c8 21439 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0460dbd3e490039bb kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0460dbd3e490039bb topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9305":"i-0460dbd3e490039bb","csi-mock-csi-mock-volumes-9007":"csi-mock-csi-mock-volumes-9007","ebs.csi.aws.com":"i-0460dbd3e490039bb"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"0a:dc:21:c8:4e:3e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.44.83 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:10:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:11:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-20 17:22:37 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-20 17:22:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0460dbd3e490039bb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:11:02 +0000 UTC,LastTransitionTime:2023-01-20 17:11:02 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:53 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:53 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:53 +0000 UTC,LastTransitionTime:2023-01-20 17:07:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:22:53 +0000 UTC,LastTransitionTime:2023-01-20 17:10:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.44.83,},NodeAddress{Type:ExternalIP,Address:3.85.92.171,},NodeAddress{Type:InternalDNS,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0460dbd3e490039bb.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-85-92-171.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec214ec8f7aec9bca6997e12c5d9fa17,SystemUUID:ec214ec8-f7ae-c9bc-a699-7e12c5d9fa17,BootID:6958a09a-b123-4522-ba50-97e69196d1e0,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9305^08104f81-98e7-11ed-9ea4-f6a152dde1de kubernetes.io/csi/csi-hostpath-ephemeral-9305^f930a76c-98e6-11ed-9ea4-f6a152dde1de kubernetes.io/csi/ebs.csi.aws.com^vol-08b1ac94b7e2a8765 kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bfed00e78ca4b211,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9305^f930a76c-98e6-11ed-9ea4-f6a152dde1de,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9305^08104f81-98e7-11ed-9ea4-f6a152dde1de,DevicePath:,},},Config:nil,},} Jan 20 17:22:54.321: INFO: Logging kubelet events for node i-0460dbd3e490039bb Jan 20 17:22:54.363: INFO: Logging pods the kubelet thinks is on node i-0460dbd3e490039bb Jan 20 17:22:54.406: INFO: inline-volume-tester2-gxhcz started at 2023-01-20 17:22:36 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:54.406: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 20 17:22:54.406: INFO: dns-test-a7d49ef2-299a-48db-83b6-0db384a7efd1 started at 2023-01-20 17:22:44 +0000 UTC (0+3 container statuses recorded) Jan 20 17:22:54.406: INFO: Container jessie-querier ready: false, restart count 0 Jan 20 17:22:54.406: INFO: Container querier ready: false, restart count 0 Jan 20 17:22:54.406: INFO: Container webserver ready: false, restart count 0 Jan 20 17:22:54.406: INFO: concurrent-27903922-b4mhl started at 2023-01-20 17:22:00 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:54.406: INFO: Container c ready: false, restart count 0 Jan 20 17:22:54.406: INFO: kube-proxy-i-0460dbd3e490039bb started 
at 2023-01-20 17:07:33 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container kube-proxy ready: true, restart count 1
Jan 20 17:22:54.406: INFO: inline-volume-tester-tmjkz started at 2023-01-20 17:22:11 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 20 17:22:54.406: INFO: pod-d9b2c311-b86f-4135-a026-635f052e5073 started at 2023-01-20 17:15:13 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container write-pod ready: true, restart count 0
Jan 20 17:22:54.406: INFO: verify-service-down-host-exec-pod started at 2023-01-20 17:15:22 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:54.406: INFO: simpletest.rc-jrszk started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container nginx ready: true, restart count 0
Jan 20 17:22:54.406: INFO: service-proxy-toggled-bvmzm started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container service-proxy-toggled ready: true, restart count 0
Jan 20 17:22:54.406: INFO: pvc-volume-tester-2l4s7 started at 2023-01-20 17:22:40 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container volume-tester ready: true, restart count 0
Jan 20 17:22:54.406: INFO: csi-mockplugin-0 started at 2023-01-20 17:21:56 +0000 UTC (0+4 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container busybox ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container csi-provisioner ready: true, restart count 1
Jan 20 17:22:54.406: INFO: Container driver-registrar ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container mock ready: true, restart count 0
Jan 20 17:22:54.406: INFO: hostexec-i-0460dbd3e490039bb-5q527 started at 2023-01-20 17:22:32 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:54.406: INFO: pod-subpath-test-preprovisionedpv-jmtx started at 2023-01-20 17:22:40 +0000 UTC (1+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Init container init-volume-preprovisionedpv-jmtx ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container test-container-subpath-preprovisionedpv-jmtx ready: false, restart count 0
Jan 20 17:22:54.406: INFO: hostexec-i-0460dbd3e490039bb-wsq44 started at 2023-01-20 17:22:47 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:54.406: INFO: kube-flannel-ds-q8m2b started at 2023-01-20 17:07:53 +0000 UTC (2+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Init container install-cni-plugin ready: true, restart count 1
Jan 20 17:22:54.406: INFO: Init container install-cni ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container kube-flannel ready: true, restart count 2
Jan 20 17:22:54.406: INFO: netserver-1 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container webserver ready: true, restart count 0
Jan 20 17:22:54.406: INFO: pod-3c544ee3-a39d-46f2-b0b2-a2be40098143 started at 2023-01-20 17:22:49 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container write-pod ready: false, restart count 0
Jan 20 17:22:54.406: INFO: test-pod-1 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container token-test ready: true, restart count 0
Jan 20 17:22:54.406: INFO: ebs-csi-node-kmj84 started at 2023-01-20 17:07:53 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container ebs-plugin ready: true, restart count 1
Jan 20 17:22:54.406: INFO: Container liveness-probe ready: true, restart count 1
Jan 20 17:22:54.406: INFO: Container node-driver-registrar ready: true, restart count 1
Jan 20 17:22:54.406: INFO: downwardapi-volume-65e507d7-2728-4f27-b145-837b0a794a2f started at 2023-01-20 17:15:24 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container client-container ready: false, restart count 0
Jan 20 17:22:54.406: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:21:54 +0000 UTC (0+7 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container csi-attacher ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container csi-provisioner ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container csi-resizer ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container hostpath ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container liveness-probe ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 20 17:22:54.406: INFO: test-grpc-46eabcb2-0c4a-4520-810d-ba498e0fcbea started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container etcd ready: true, restart count 0
Jan 20 17:22:54.406: INFO: startup-04b7934a-c3e8-415c-ba2f-32e3d709e2f1 started at 2023-01-20 17:14:57 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container busybox ready: false, restart count 0
Jan 20 17:22:54.406: INFO: netserver-1 started at 2023-01-20 17:22:47 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container webserver ready: false, restart count 0
Jan 20 17:22:54.406: INFO: service-proxy-disabled-hc668 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container service-proxy-disabled ready: true, restart count 0
Jan 20 17:22:54.406: INFO: dns-test-873dc91d-f257-48f2-916d-46f60b12e695 started at 2023-01-20 17:22:41 +0000 UTC (0+3 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container jessie-querier ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container querier ready: true, restart count 0
Jan 20 17:22:54.406: INFO: Container webserver ready: true, restart count 0
Jan 20 17:22:54.406: INFO: hostexec-i-0460dbd3e490039bb-gl7xm started at 2023-01-20 17:15:20 +0000 UTC (0+1 container statuses recorded)
Jan 20 17:22:54.406: INFO: Container agnhost-container ready: true, restart count 0
Jan 20 17:22:54.996: INFO: Latency metrics for node i-0460dbd3e490039bb
Jan 20 17:22:54.996: INFO: Logging node info for node i-048afc59cd0c5fa4a
Jan 20 17:22:55.027: INFO: Node Info: &Node{ObjectMeta:{i-048afc59cd0c5fa4a 906bdaca-cfdb-4619-98d1-2751663efe41 21444 0 2023-01-20 17:07:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-048afc59cd0c5fa4a kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a
topology.hostpath.csi/node:i-048afc59cd0c5fa4a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-196":"i-048afc59cd0c5fa4a","csi-mock-csi-mock-volumes-3675":"i-048afc59cd0c5fa4a","ebs.csi.aws.com":"i-048afc59cd0c5fa4a"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"52:68:72:e8:79:3f"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.41.86 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-20 17:07:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:18:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:22:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-20 17:22:53 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-048afc59cd0c5fa4a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054806528 0} {<nil>} 3959772Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949948928 0} {<nil>} 3857372Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:18:12 +0000 UTC,LastTransitionTime:2023-01-20 17:18:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:52 +0000 UTC,LastTransitionTime:2023-01-20 17:07:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:22:52 +0000 UTC,LastTransitionTime:2023-01-20 17:18:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.41.86,},NodeAddress{Type:ExternalIP,Address:34.201.135.194,},NodeAddress{Type:InternalDNS,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:Hostname,Address:i-048afc59cd0c5fa4a.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-34-201-135-194.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2886bb32c49932d355813f2015452a,SystemUUID:ec2886bb-32c4-9932-d355-813f2015452a,BootID:c3c6217a-92a9-4cf1-a92f-5cf2a5908c35,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3675^030ea248-98e7-11ed-9b1e-da37649e7922],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3675^030ea248-98e7-11ed-9b1e-da37649e7922,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-196^117332c3-98e7-11ed-afd7-6e340c83b39e,DevicePath:,},},Config:nil,},} Jan 20 17:22:55.028: INFO: Logging kubelet events for node i-048afc59cd0c5fa4a Jan 20 17:22:55.063: INFO: Logging pods the kubelet thinks is on node i-048afc59cd0c5fa4a Jan 20 17:22:55.112: INFO: startup-script started at 2023-01-20 17:14:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container startup-script ready: false, restart count 0 Jan 20 17:22:55.112: INFO: rs-txj5c started at 2023-01-20 17:21:18 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container donothing ready: false, restart count 0 Jan 20 17:22:55.112: INFO: csi-mockplugin-0 started at 2023-01-20 17:22:22 +0000 UTC (0+3 container statuses recorded) Jan 20 17:22:55.112: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container driver-registrar ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container mock ready: true, restart count 0 Jan 20 17:22:55.112: INFO: csi-mockplugin-resizer-0 started at 2023-01-20 17:22:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:22:55.112: INFO: pvc-volume-tester-gtbjb started at 2023-01-20 17:22:43 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container volume-tester ready: true, restart count 0 Jan 20 17:22:55.112: INFO: coredns-autoscaler-7cb5c5b969-kxr22 started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container autoscaler ready: false, restart count 0 Jan 20 17:22:55.112: INFO: ebs-csi-node-dkvln started at 2023-01-20 17:18:06 +0000 UTC (0+3 container statuses recorded) Jan 20 17:22:55.112: INFO: Container ebs-plugin ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:22:55.112: INFO: kube-proxy-i-048afc59cd0c5fa4a started at 2023-01-20 17:07:31 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:22:55.112: INFO: coredns-559769c974-mkzlp started at 2023-01-20 17:07:54 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container coredns ready: true, restart count 1 Jan 20 17:22:55.112: INFO: hostpath-client started at 2023-01-20 17:22:52 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container hostpath-client ready: false, restart count 0 Jan 20 17:22:55.112: INFO: netserver-2 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container webserver ready: false, restart count 0 Jan 20 17:22:55.112: INFO: netserver-2 started at 2023-01-20 17:22:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container webserver ready: false, restart count 0 Jan 20 17:22:55.112: INFO: csi-hostpathplugin-0 started at 2023-01-20 17:22:03 +0000 UTC (0+7 container statuses recorded) Jan 20 17:22:55.112: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 
17:22:55.112: INFO: Container csi-provisioner ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container csi-resizer ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container hostpath ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container liveness-probe ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 20 17:22:55.112: INFO: kube-flannel-ds-nlnn2 started at 2023-01-20 17:18:06 +0000 UTC (2+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Init container install-cni-plugin ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:22:55.112: INFO: Container kube-flannel ready: true, restart count 0 Jan 20 17:22:55.112: INFO: csi-mockplugin-attacher-0 started at 2023-01-20 17:22:22 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.112: INFO: Container csi-attacher ready: true, restart count 0 Jan 20 17:22:55.393: INFO: Latency metrics for node i-048afc59cd0c5fa4a Jan 20 17:22:55.393: INFO: Logging node info for node i-0f775d321e19704c3 Jan 20 17:22:55.424: INFO: Node Info: &Node{ObjectMeta:{i-0f775d321e19704c3 19607256-f185-404f-84dd-0198c716bca7 21315 0 2023-01-20 17:07:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:i-0f775d321e19704c3 kubernetes.io/os:linux node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:us-east-1a topology.hostpath.csi/node:i-0f775d321e19704c3 topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0f775d321e19704c3"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"72:43:d6:40:e8:77"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.55.61 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2023-01-20 17:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 
2023-01-20 17:09:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {flanneld Update v1 2023-01-20 17:09:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:22:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{},"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-20 17:22:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-0f775d321e19704c3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054814720 0} {<nil>} 3959780Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949957120 0} {<nil>} 3857380Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:09:35 +0000 UTC,LastTransitionTime:2023-01-20 17:09:35 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:45 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:45 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:22:45 +0000 UTC,LastTransitionTime:2023-01-20 17:07:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:22:45 +0000 UTC,LastTransitionTime:2023-01-20 17:09:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.61,},NodeAddress{Type:ExternalIP,Address:3.93.201.229,},NodeAddress{Type:InternalDNS,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:Hostname,Address:i-0f775d321e19704c3.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-93-201-229.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a4be20ed59f70fa8678b6d03004b4,SystemUUID:ec2a4be2-0ed5-9f70-fa86-78b6d03004b4,BootID:d3100caa-b833-4d03-b5c0-4cb4a8b87060,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0],SizeBytes:23881995,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 
registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:933940f13b3ea0abc62e656c1aa5c5b47c04b15d71250413a6b821bd0c58b94e registry.k8s.io/sig-storage/livenessprobe:v2.7.0],SizeBytes:8688564,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-5223^c8620916-98e5-11ed-99ff-464f19649f33,DevicePath:,},},Config:nil,},} Jan 20 17:22:55.425: INFO: Logging kubelet events for node i-0f775d321e19704c3 Jan 20 17:22:55.460: INFO: Logging pods the kubelet thinks is on node i-0f775d321e19704c3 Jan 20 17:22:55.498: INFO: pvc-volume-tester-v7khp started at 2023-01-20 17:13:41 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container volume-tester ready: false, restart count 0 Jan 20 17:22:55.498: INFO: test-pod-3 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container token-test ready: true, restart count 0 Jan 20 17:22:55.498: INFO: coredns-autoscaler-7cb5c5b969-zvbqv started at 2023-01-20 17:17:40 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container autoscaler ready: true, restart count 0 Jan 20 17:22:55.498: 
INFO: rs-5nr7j started at 2023-01-20 17:22:33 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container donothing ready: false, restart count 0 Jan 20 17:22:55.498: INFO: kube-proxy-i-0f775d321e19704c3 started at 2023-01-20 17:07:34 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container kube-proxy ready: true, restart count 1 Jan 20 17:22:55.498: INFO: test-pod-2 started at 2023-01-20 17:15:25 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container token-test ready: true, restart count 0 Jan 20 17:22:55.498: INFO: netserver-3 started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container webserver ready: true, restart count 0 Jan 20 17:22:55.498: INFO: liveness-e59c9a22-7185-448b-883a-718c1d8f0e69 started at 2023-01-20 17:22:37 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container agnhost-container ready: true, restart count 0 Jan 20 17:22:55.498: INFO: simpletest.rc-9xd2k started at 2023-01-20 17:15:26 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container nginx ready: true, restart count 0 Jan 20 17:22:55.498: INFO: ebs-csi-node-74dsh started at 2023-01-20 17:07:54 +0000 UTC (0+3 container statuses recorded) Jan 20 17:22:55.498: INFO: Container ebs-plugin ready: true, restart count 1 Jan 20 17:22:55.498: INFO: Container liveness-probe ready: true, restart count 1 Jan 20 17:22:55.498: INFO: Container node-driver-registrar ready: true, restart count 1 Jan 20 17:22:55.498: INFO: test-new-deployment-7f5969cbc7-v7dd7 started at 2023-01-20 17:22:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container httpd ready: false, restart count 0 Jan 20 17:22:55.498: INFO: service-proxy-disabled-jg82r started at 2023-01-20 17:17:52 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:22:55.498: INFO: kube-flannel-ds-d9rm4 started at 2023-01-20 17:07:54 +0000 UTC (2+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Init container install-cni-plugin ready: true, restart count 1 Jan 20 17:22:55.498: INFO: Init container install-cni ready: true, restart count 0 Jan 20 17:22:55.498: INFO: Container kube-flannel ready: true, restart count 2 Jan 20 17:22:55.498: INFO: service-proxy-toggled-8j48l started at 2023-01-20 17:15:04 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 20 17:22:55.498: INFO: netserver-3 started at 2023-01-20 17:22:47 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container webserver ready: false, restart count 0 Jan 20 17:22:55.498: INFO: service-proxy-disabled-xwb98 started at 2023-01-20 17:14:55 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 20 17:22:55.498: INFO: hostpath-symlink-prep-provisioning-2550 started at 2023-01-20 17:22:52 +0000 UTC (0+1 container statuses recorded) Jan 20 17:22:55.498: INFO: Container init-volume-provisioning-2550 ready: false, restart count 0 Jan 20 17:22:55.756: INFO: Latency metrics for node i-0f775d321e19704c3 [DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (default fs)] volumes tear down framework | framework.go:193 STEP: Destroying namespace "volume-6475" for this suite. 01/20/23 17:22:55.756
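The node dumps in this section end with VolumesInUse and VolumesAttached, which is what to inspect when cleanup leaves CSI volumes behind. A minimal client-go sketch that pulls the same two fields for one of the nodes dumped above; the kubeconfig path matches the one in these logs, and error handling is reduced to panics for brevity (an illustrative sketch, not part of the harness):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the framework logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch one of the nodes dumped above and print the volume bookkeeping
	// that appears in the dump as VolumesInUse / VolumesAttached.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "i-048afc59cd0c5fa4a", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, v := range node.Status.VolumesInUse {
		fmt.Println("in use:  ", v)
	}
	for _, av := range node.Status.VolumesAttached {
		fmt.Println("attached:", av.Name)
	}
}
```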
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(filesystem\svolmode\)\]\svolumeMode\sshould\snot\smount\s\/\smap\sunused\svolumes\sin\sa\spod\s\[LinuxOnly\]$'
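The focus pattern in the command above is simply the full Ginkgo test title with spaces (\s) and regex metacharacters escaped so it survives shell quoting. A quick sanity check of that mapping, with both strings taken from this report (an illustrative sketch, not part of the harness):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the repro command above.
	pattern := `Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblockfs\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(filesystem\svolmode\)\]\svolumeMode\sshould\snot\smount\s\/\smap\sunused\svolumes\sin\sa\spod\s\[LinuxOnly\]$`

	// The plain test title, as printed in the suite output below.
	title := `Kubernetes e2e suite [It] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]`

	fmt.Println(regexp.MustCompile(pattern).MatchString(title)) // prints: true
}
```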
test/e2e/storage/utils/local.go:162 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).teardownLoopDevice(0xc002e9f3e0, {0xc0027d8a40, 0x36}, 0xc0019a6800) test/e2e/storage/utils/local.go:162 +0x176 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc002e9f3e0, 0xc000fd6c40) test/e2e/storage/utils/local.go:167 +0x36 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlockFS(0xc002e9f3e0, 0xc000fd6c40) test/e2e/storage/utils/local.go:193 +0xbf k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0xc0013912c0?, 0x7553220?) test/e2e/storage/utils/local.go:353 +0xc5 k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x13?) test/e2e/storage/drivers/in_tree.go:1760 +0x28 k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x8022ee8?) test/e2e/storage/utils/utils.go:748 +0x6d k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc001564f28) test/e2e/storage/framework/volume_resource.go:236 +0xc89 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumemode.go:187 +0x3e k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9fd
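For context on the frames above: cleanupLocalVolumeBlockFS unmounts the backing directory and teardownLoopDevice then detaches the loop device, all by exec'ing shell commands into a hostexec pod on the node; the exec logs below show the exact commands. A minimal hedged sketch of the same setup/teardown lifecycle run locally with os/exec, assuming root on a Linux host with losetup available; the directory is a made-up stand-in for the test's UUID-suffixed path:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one shell command line and returns its trimmed combined output.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Hypothetical path; the test uses a UUID-suffixed /tmp/local-driver-* dir.
	dir := "/tmp/local-driver-example"

	// Setup, mirroring the exec logs below: back a loop device with a file,
	// find the device node, then format and mount it.
	if _, err := run(fmt.Sprintf("mkdir -p %[1]s && dd if=/dev/zero of=%[1]s/file bs=4096 count=5120 && losetup -f %[1]s/file", dir)); err != nil {
		panic(err)
	}
	dev, err := run(fmt.Sprintf("losetup | grep %s/file | awk '{ print $1 }'", dir))
	if err != nil || dev == "" {
		panic("could not find loop device")
	}
	if _, err := run(fmt.Sprintf("mkfs -t ext4 %[1]s && mount -t ext4 %[1]s %[2]s && chmod o+rwx %[2]s", dev, dir)); err != nil {
		panic(err)
	}

	// Teardown, the phase where this run failed: unmount, then detach the
	// loop device. In the test these commands went through exec into a
	// hostexec pod, and the exec hit "cannot exec in a deleted state"
	// because the pod's container was being torn down as the node shut down.
	if _, err := run(fmt.Sprintf("umount %s && losetup -d %s", dir, dev)); err != nil {
		panic(err)
	}
	fmt.Println("detached", dev)
}
```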
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode set up framework | framework.go:178 STEP: Creating a kubernetes client 01/20/23 17:17:10.983 Jan 20 17:17:10.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volumemode 01/20/23 17:17:10.984 STEP: Waiting for a default service account to be provisioned in namespace 01/20/23 17:17:11.081 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/20/23 17:17:11.142 [BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/framework/metrics/init/init.go:31 [It] should not mount / map unused volumes in a pod [LinuxOnly] test/e2e/storage/testsuites/volumemode.go:354 Jan 20 17:17:11.242: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics STEP: Creating block device on node "i-03af3dbca738ba168" using path "/tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850" 01/20/23 17:17:11.242 Jan 20 17:17:11.297: INFO: Waiting up to 5m0s for pod "hostexec-i-03af3dbca738ba168-j6jt4" in namespace "volumemode-9142" to be "running" Jan 20 17:17:11.329: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.263474ms Jan 20 17:17:13.362: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065515797s Jan 20 17:17:15.361: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064035344s Jan 20 17:17:17.361: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064486744s Jan 20 17:17:19.361: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064518122s Jan 20 17:17:21.361: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064358752s Jan 20 17:17:23.360: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063561736s Jan 20 17:17:25.361: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.064201598s Jan 20 17:17:27.360: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.063818798s Jan 20 17:17:27.360: INFO: Pod "hostexec-i-03af3dbca738ba168-j6jt4" satisfied condition "running" Jan 20 17:17:27.360: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850 && dd if=/dev/zero of=/tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850/file bs=4096 count=5120 && losetup -f /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850/file] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-j6jt4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:27.360: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:27.362: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:27.362: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-j6jt4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkdir+-p+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850+%26%26+dd+if%3D%2Fdev%2Fzero+of%3D%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850%2Ffile+bs%3D4096+count%3D5120+%26%26+losetup+-f+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850%2Ffile&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:27.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-j6jt4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:27.654: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:27.655: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:27.655: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-j6jt4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:27.923: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkfs -t ext4 /dev/loop3 && mount -t ext4 /dev/loop3 /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850 && chmod o+rwx /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-j6jt4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:27.923: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:27.924: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:27.925: INFO: ExecWithOptions: execute(POST 
https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-j6jt4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=mkfs+-t+ext4+%2Fdev%2Floop3+%26%26+mount+-t+ext4+%2Fdev%2Floop3+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850+%26%26+chmod+o%2Brwx+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:28.287: INFO: Creating resource for pre-provisioned PV Jan 20 17:17:28.287: INFO: Creating PVC and PV STEP: Creating a PVC followed by a PV 01/20/23 17:17:28.287 Jan 20 17:17:28.352: INFO: Waiting for PV local-tt5jh to bind to PVC pvc-94cqf Jan 20 17:17:28.352: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-94cqf] to have phase Bound Jan 20 17:17:28.382: INFO: PersistentVolumeClaim pvc-94cqf found but phase is Pending instead of Bound. Jan 20 17:17:30.413: INFO: PersistentVolumeClaim pvc-94cqf found but phase is Pending instead of Bound. Jan 20 17:17:32.445: INFO: PersistentVolumeClaim pvc-94cqf found but phase is Pending instead of Bound. Jan 20 17:17:34.476: INFO: PersistentVolumeClaim pvc-94cqf found but phase is Pending instead of Bound. Jan 20 17:17:36.507: INFO: PersistentVolumeClaim pvc-94cqf found but phase is Pending instead of Bound. Jan 20 17:17:38.540: INFO: PersistentVolumeClaim pvc-94cqf found but phase is Pending instead of Bound. Jan 20 17:17:40.571: INFO: PersistentVolumeClaim pvc-94cqf found and phase=Bound (12.219215098s) Jan 20 17:17:40.571: INFO: Waiting up to 3m0s for PersistentVolume local-tt5jh to have phase Bound Jan 20 17:17:40.602: INFO: PersistentVolume local-tt5jh found and phase=Bound (30.698058ms) STEP: Creating pod 01/20/23 17:17:40.663 Jan 20 17:17:40.698: INFO: Waiting up to 5m0s for pod "pod-64151201-04da-4623-bd89-6340dd1560b0" in namespace "volumemode-9142" to be "running" Jan 20 17:17:40.728: INFO: Pod "pod-64151201-04da-4623-bd89-6340dd1560b0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.437124ms Jan 20 17:17:42.759: INFO: Pod "pod-64151201-04da-4623-bd89-6340dd1560b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061561486s Jan 20 17:17:44.760: INFO: Pod "pod-64151201-04da-4623-bd89-6340dd1560b0": Phase="Running", Reason="", readiness=true. Elapsed: 4.062150824s Jan 20 17:17:44.760: INFO: Pod "pod-64151201-04da-4623-bd89-6340dd1560b0" satisfied condition "running" STEP: Listing mounted volumes in the pod 01/20/23 17:17:44.821 Jan 20 17:17:44.856: INFO: Waiting up to 5m0s for pod "hostexec-i-03af3dbca738ba168-jwsbt" in namespace "volumemode-9142" to be "running" Jan 20 17:17:44.886: INFO: Pod "hostexec-i-03af3dbca738ba168-jwsbt": Phase="Pending", Reason="", readiness=false. Elapsed: 30.332693ms Jan 20 17:17:46.917: INFO: Pod "hostexec-i-03af3dbca738ba168-jwsbt": Phase="Running", Reason="", readiness=true. Elapsed: 2.061422223s Jan 20 17:17:46.917: INFO: Pod "hostexec-i-03af3dbca738ba168-jwsbt" satisfied condition "running" Jan 20 17:17:46.917: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-d /var/lib/kubelet/pods/637c65a5-3433-406b-96f0-95ef1e1a5f8f/volumes] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-jwsbt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:46.917: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:46.918: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:46.918: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-jwsbt/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=test+%21+-d+%2Fvar%2Flib%2Fkubelet%2Fpods%2F637c65a5-3433-406b-96f0-95ef1e1a5f8f%2Fvolumes&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:47.192: INFO: exec i-03af3dbca738ba168: command: test ! -d /var/lib/kubelet/pods/637c65a5-3433-406b-96f0-95ef1e1a5f8f/volumes Jan 20 17:17:47.192: INFO: exec i-03af3dbca738ba168: stdout: "" Jan 20 17:17:47.192: INFO: exec i-03af3dbca738ba168: stderr: "" Jan 20 17:17:47.192: INFO: exec i-03af3dbca738ba168: exit code: 0 Jan 20 17:17:47.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c find /var/lib/kubelet/pods/637c65a5-3433-406b-96f0-95ef1e1a5f8f/volumes -mindepth 2 -maxdepth 2] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-jwsbt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:47.192: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:47.194: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:47.194: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-jwsbt/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=find+%2Fvar%2Flib%2Fkubelet%2Fpods%2F637c65a5-3433-406b-96f0-95ef1e1a5f8f%2Fvolumes+-mindepth+2+-maxdepth+2&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:47.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-d /var/lib/kubelet/pods/637c65a5-3433-406b-96f0-95ef1e1a5f8f/volumeDevices] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-jwsbt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:47.470: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:47.471: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:47.471: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-jwsbt/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=test+%21+-d+%2Fvar%2Flib%2Fkubelet%2Fpods%2F637c65a5-3433-406b-96f0-95ef1e1a5f8f%2FvolumeDevices&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Checking that volume plugin kubernetes.io/local-volume is not used in pod directory 01/20/23 17:17:47.74 STEP: Deleting pod hostexec-i-03af3dbca738ba168-jwsbt in namespace volumemode-9142 01/20/23 17:17:47.74 Jan 20 17:17:47.779: INFO: Deleting pod "pod-64151201-04da-4623-bd89-6340dd1560b0" in namespace "volumemode-9142" Jan 20 17:17:47.816: INFO: Wait up to 5m0s for pod "pod-64151201-04da-4623-bd89-6340dd1560b0" to be fully deleted STEP: Deleting pv and pvc 01/20/23 17:17:49.877 Jan 20 17:17:49.877: INFO: Deleting PersistentVolumeClaim "pvc-94cqf" Jan 20 17:17:49.909: INFO: Deleting PersistentVolume "local-tt5jh" Jan 20 17:17:49.945: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-j6jt4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:49.945: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:49.946: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:49.946: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-j6jt4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=umount+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:50.206: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-j6jt4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:50.206: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:50.207: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:50.207: INFO: ExecWithOptions: execute(POST 
https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-j6jt4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=E2E_LOOP_DEV%3D%24%28losetup+%7C+grep+%2Ftmp%2Flocal-driver-2228bd84-dd08-430a-bd88-295e27e48850%2Ffile+%7C+awk+%27%7B+print+%241+%7D%27%29+2%3E%261+%3E+%2Fdev%2Fnull+%26%26+echo+%24%7BE2E_LOOP_DEV%7D&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) STEP: Tear down block device "/dev/loop3" on node "i-03af3dbca738ba168" at path /tmp/local-driver-2228bd84-dd08-430a-bd88-295e27e48850/file 01/20/23 17:17:50.461 Jan 20 17:17:50.461: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop3] Namespace:volumemode-9142 PodName:hostexec-i-03af3dbca738ba168-j6jt4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jan 20 17:17:50.461: INFO: >>> kubeConfig: /root/.kube/config Jan 20 17:17:50.462: INFO: ExecWithOptions: Clientset creation Jan 20 17:17:50.462: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-flannel-flatcar-k26-ko26.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-9142/pods/hostexec-i-03af3dbca738ba168-j6jt4/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=losetup+-d+%2Fdev%2Floop3&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jan 20 17:17:50.890: INFO: exec i-03af3dbca738ba168: command: losetup -d /dev/loop3 Jan 20 17:17:50.890: INFO: exec i-03af3dbca738ba168: stdout: "" Jan 20 17:17:50.890: INFO: exec i-03af3dbca738ba168: stderr: "" Jan 20 17:17:50.890: INFO: exec i-03af3dbca738ba168: exit code: 0 Jan 20 17:17:50.890: INFO: Unexpected error: <*errors.errorString | 0xc00126f0f0>: { s: "Internal error occurred: error executing command in container: failed to exec in container: failed to create exec \"fd75afe45696ed82ecef9c20f1132aa97b9c785682dd3eaf5dd51eba909a37f5\": cannot exec in a deleted state: unknown", } Jan 20 17:17:50.890: FAIL: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "fd75afe45696ed82ecef9c20f1132aa97b9c785682dd3eaf5dd51eba909a37f5": cannot exec in a deleted state: unknown Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).teardownLoopDevice(0xc002e9f3e0, {0xc0027d8a40, 0x36}, 0xc0019a6800) test/e2e/storage/utils/local.go:162 +0x176 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlock(0xc002e9f3e0, 0xc000fd6c40) test/e2e/storage/utils/local.go:167 +0x36 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).cleanupLocalVolumeBlockFS(0xc002e9f3e0, 0xc000fd6c40) test/e2e/storage/utils/local.go:193 +0xbf k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Remove(0xc0013912c0?, 0x7553220?) test/e2e/storage/utils/local.go:353 +0xc5 k8s.io/kubernetes/test/e2e/storage/drivers.(*localVolume).DeleteVolume(0x13?) test/e2e/storage/drivers/in_tree.go:1760 +0x28 k8s.io/kubernetes/test/e2e/storage/utils.TryFunc(0x8022ee8?) 
test/e2e/storage/utils/utils.go:748 +0x6d k8s.io/kubernetes/test/e2e/storage/framework.(*VolumeResource).CleanupResource(0xc001564f28) test/e2e/storage/framework/volume_resource.go:236 +0xc89 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func3() test/e2e/storage/testsuites/volumemode.go:187 +0x3e k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() test/e2e/storage/testsuites/volumemode.go:416 +0x9fd [AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/framework/node/init/init.go:32 Jan 20 17:17:50.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 20 17:17:50.922: INFO: Condition Ready of node i-03af3dbca738ba168 is false instead of true. Reason: KubeletNotReady, message: node is shutting down Jan 20 17:17:50.922: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:52.956: INFO: Condition Ready of node i-03af3dbca738ba168 is false instead of true. Reason: KubeletNotReady, message: node is shutting down Jan 20 17:17:52.956: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:54.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:17:54.954: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:56.955: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:17:56.955: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:17:58.956: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:17:58.956: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:00.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. 
Failure Jan 20 17:18:00.954: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:02.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:02.954: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:04.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:04.954: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:06.955: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:06.955: INFO: Condition Ready of node i-048afc59cd0c5fa4a is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:19 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:23 +0000 UTC}]. Failure Jan 20 17:18:08.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:10.955: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:12.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:14.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:16.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. 
Failure Jan 20 17:18:18.955: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:20.954: INFO: Condition Ready of node i-03af3dbca738ba168 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule 2023-01-20 17:17:50 +0000 UTC} {node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure Jan 20 17:18:22.954: INFO: Condition Ready of node i-03af3dbca738ba168 is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoExecute 2023-01-20 17:17:53 +0000 UTC}]. Failure [DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/storage/drivers/in_tree.go:1734 STEP: Deleting pod hostexec-i-03af3dbca738ba168-j6jt4 in namespace volumemode-9142 01/20/23 17:18:24.971 [DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode dump namespaces | framework.go:196 STEP: dump namespace information after failure 01/20/23 17:18:25.057 STEP: Collecting events from namespace "volumemode-9142". 01/20/23 17:18:25.057 STEP: Found 17 events. 01/20/23 17:18:25.127 Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:11 +0000 UTC - event for hostexec-i-03af3dbca738ba168-j6jt4: {default-scheduler } Scheduled: Successfully assigned volumemode-9142/hostexec-i-03af3dbca738ba168-j6jt4 to i-03af3dbca738ba168 Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:16 +0000 UTC - event for hostexec-i-03af3dbca738ba168-j6jt4: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:16 +0000 UTC - event for hostexec-i-03af3dbca738ba168-j6jt4: {kubelet i-03af3dbca738ba168} Created: Created container agnhost-container Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:16 +0000 UTC - event for hostexec-i-03af3dbca738ba168-j6jt4: {kubelet i-03af3dbca738ba168} Started: Started container agnhost-container Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:28 +0000 UTC - event for pvc-94cqf: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volumemode-9142" not found Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:40 +0000 UTC - event for pod-64151201-04da-4623-bd89-6340dd1560b0: {default-scheduler } Scheduled: Successfully assigned volumemode-9142/pod-64151201-04da-4623-bd89-6340dd1560b0 to i-03af3dbca738ba168 Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:41 +0000 UTC - event for pod-64151201-04da-4623-bd89-6340dd1560b0: {kubelet i-03af3dbca738ba168} Started: Started container write-pod Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:41 +0000 UTC - event for pod-64151201-04da-4623-bd89-6340dd1560b0: {kubelet i-03af3dbca738ba168} Created: Created container write-pod Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:41 +0000 UTC - event for pod-64151201-04da-4623-bd89-6340dd1560b0: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.29-4" already present on machine Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:44 +0000 UTC - event for hostexec-i-03af3dbca738ba168-jwsbt: {default-scheduler } 
Scheduled: Successfully assigned volumemode-9142/hostexec-i-03af3dbca738ba168-jwsbt to i-03af3dbca738ba168 Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:45 +0000 UTC - event for hostexec-i-03af3dbca738ba168-jwsbt: {kubelet i-03af3dbca738ba168} Created: Created container agnhost-container Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:45 +0000 UTC - event for hostexec-i-03af3dbca738ba168-jwsbt: {kubelet i-03af3dbca738ba168} Started: Started container agnhost-container Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:45 +0000 UTC - event for hostexec-i-03af3dbca738ba168-jwsbt: {kubelet i-03af3dbca738ba168} Pulled: Container image "registry.k8s.io/e2e-test-images/agnhost:2.43" already present on machine Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:47 +0000 UTC - event for pod-64151201-04da-4623-bd89-6340dd1560b0: {kubelet i-03af3dbca738ba168} Killing: Stopping container write-pod Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:48 +0000 UTC - event for hostexec-i-03af3dbca738ba168-jwsbt: {kubelet i-03af3dbca738ba168} Killing: Stopping container agnhost-container Jan 20 17:18:25.127: INFO: At 2023-01-20 17:17:50 +0000 UTC - event for hostexec-i-03af3dbca738ba168-j6jt4: {kubelet i-03af3dbca738ba168} Killing: Stopping container agnhost-container Jan 20 17:18:25.127: INFO: At 2023-01-20 17:18:23 +0000 UTC - event for hostexec-i-03af3dbca738ba168-j6jt4: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod volumemode-9142/hostexec-i-03af3dbca738ba168-j6jt4 Jan 20 17:18:25.181: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 17:18:25.181: INFO: Jan 20 17:18:25.222: INFO: Logging node info for node i-02cae73514916eb60 Jan 20 17:18:25.266: INFO: Node Info: &Node{ObjectMeta:{i-02cae73514916eb60 6d0a8063-275e-4cb5-a7e1-ecf07fb2d810 6920 0 2023-01-20 17:06:32 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:us-east-1 failure-domain.beta.kubernetes.io/zone:us-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:i-02cae73514916eb60 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:us-east-1a topology.kubernetes.io/region:us-east-1 topology.kubernetes.io/zone:us-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02cae73514916eb60"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"e6:28:1d:38:9c:ba"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:172.20.51.65 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-20 17:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}}}} } {protokube Update v1 2023-01-20 17:07:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:taints":{}}} } {aws-cloud-controller-manager Update v1 2023-01-20 17:07:06 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{"f:address":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-20 17:16:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{},"f:kernelVersion":{},"f:osImage":{}}}} status} {flanneld Update v1 2023-01-20 17:16:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///us-east-1a/i-02cae73514916eb60,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3895427072 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790569472 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-20 17:16:23 +0000 UTC,LastTransitionTime:2023-01-20 17:16:23 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 
+0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:06:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-20 17:16:15 +0000 UTC,LastTransitionTime:2023-01-20 17:16:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.65,},NodeAddress{Type:ExternalIP,Address:100.26.139.144,},NodeAddress{Type:InternalDNS,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:Hostname,Address:i-02cae73514916eb60.ec2.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-100-26-139-144.compute-1.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26604dd0f376256ae652d6e661c235,SystemUUID:ec26604d-d0f3-7625-6ae6-52d6e661c235,BootID:a089a900-b2da-4d1d-8de1-3fdf21e97305,KernelVersion:5.15.86-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3446.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.10,KubeletVersion:v1.26.1,KubeProxyVersion:v1.26.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.26.1],SizeBytes:135178704,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.26.1],SizeBytes:124995897,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.26.1],SizeBytes:67205316,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.26.1],SizeBytes:57661752,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:43191763,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:42821714,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:29725845,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel@sha256:c9786f434d4663c924aeca1a2e479786d63df0d56c5d6bd62a64915f81d62ff0 docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2],SizeBytes:20503771,},ContainerImage{Names:[registry.k8s.io/provider-aws/cloud-controller-manager@sha256:fdeb61e3e42ecd9cca868d550ebdb88dd6341d9e91fcfa9a37e227dab2ad22cb registry.k8s.io/provider-aws/cloud-controller-manager:v1.26.0],SizeBytes:20154862,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:4965793,},ContainerImage{Names:[docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0],SizeBytes:3821285,},ContainerImage{Names:[registry.k8s.io/pause@sha
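The Node dump above serializes the full v1.Node object. When only the scheduling-relevant state matters (here, the node-role.kubernetes.io/control-plane taint with effect NoSchedule, and the Ready condition), a short helper can extract just those fields. The sketch below is hypothetical, not framework code, and reuses the clientset built in the previous example; the node name is taken from the dump.

    // Hypothetical helper: fetch one node and print only its taints and
    // conditions. Assumes the imports and client setup shown earlier.
    func printNodeState(client *kubernetes.Clientset) error {
        node, err := client.CoreV1().Nodes().Get(context.TODO(), "i-02cae73514916eb60", metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, t := range node.Spec.Taints {
            // e.g. "taint: node-role.kubernetes.io/control-plane=:NoSchedule"
            fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
        }
        for _, c := range node.Status.Conditions {
            // e.g. "condition: Ready=True (KubeletReady)"
            fmt.Printf("condition: %s=%s (%s)\n", c.Type, c.Status, c.Reason)
        }
        return nil
    }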