Result | FAILURE |
Tests | 35 failed / 749 succeeded |
Started | |
Elapsed | 45m44s |
Revision | master |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sfsgroupchangepolicy\s\(Always\)\[LinuxOnly\]\,\spod\screated\swith\san\sinitial\sfsgroup\,\svolume\scontents\sownership\schanged\svia\schgrp\sin\sfirst\spod\,\snew\spod\swith\ssame\sfsgroup\sapplied\sto\sthe\svolume\scontents$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214 Jan 30 22:55:26.739: Unexpected error: <*errors.errorString | 0xc005702400>: { s: "pod \"pod-9361d956-3a9e-45fa-92dc-ac8884faccaa\" is not Running: timed out waiting for the condition", } pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa" is not Running: timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:262from junit_06.xml
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:48:53.206: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename fsgroupchangepolicy �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214 Jan 30 22:48:54.205: INFO: Creating resource for dynamic PV Jan 30 22:48:54.205: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} �[1mSTEP�[0m: creating a StorageClass fsgroupchangepolicy-3075-e2e-scqphkv �[1mSTEP�[0m: creating a claim Jan 30 22:48:54.349: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil �[1mSTEP�[0m: Creating Pod in namespace fsgroupchangepolicy-3075 with fsgroup 1000 Jan 30 22:50:11.216: INFO: Pod fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 started successfully �[1mSTEP�[0m: Creating a sub-directory and file, and verifying their ownership is 1000 Jan 30 22:50:11.216: INFO: ExecWithOptions {Command:[/bin/sh -c touch /mnt/volume1/file1] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:11.216: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:11.217: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:11.217: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fmnt%2Fvolume1%2Ffile1&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:12.337: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/volume1/file1] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:12.337: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:12.338: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:12.338: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=ls+-l+%2Fmnt%2Fvolume1%2Ffile1&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:13.631: INFO: pod fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 exec for cmd ls -l /mnt/volume1/file1, stdout: -rw-r--r-- 1 root 1000 0 Jan 30 22:50 /mnt/volume1/file1, stderr: Jan 30 22:50:13.631: INFO: stdout split: [-rw-r--r-- 1 root 1000 0 Jan 30 22:50 /mnt/volume1/file1], expected gid: 1000 Jan 30 22:50:13.631: INFO: ExecWithOptions {Command:[/bin/sh -c mkdir /mnt/volume1/subdir] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:13.631: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:13.632: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:13.632: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=mkdir+%2Fmnt%2Fvolume1%2Fsubdir&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:14.873: INFO: ExecWithOptions {Command:[/bin/sh -c touch /mnt/volume1/subdir/file2] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:14.873: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:14.873: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:14.874: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fmnt%2Fvolume1%2Fsubdir%2Ffile2&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:16.078: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/volume1/subdir/file2] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:16.079: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:16.080: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:16.080: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=ls+-l+%2Fmnt%2Fvolume1%2Fsubdir%2Ffile2&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:17.257: INFO: pod fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 exec for cmd ls -l /mnt/volume1/subdir/file2, stdout: -rw-r--r-- 1 root 1000 0 Jan 30 22:50 /mnt/volume1/subdir/file2, stderr: Jan 30 22:50:17.257: INFO: stdout split: [-rw-r--r-- 1 root 1000 0 Jan 30 22:50 /mnt/volume1/subdir/file2], expected gid: 1000 �[1mSTEP�[0m: Changing the root directory file ownership to 2000 Jan 30 22:50:17.257: INFO: ExecWithOptions {Command:[/bin/sh -c chgrp 2000 /mnt/volume1/file1] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:17.258: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:17.258: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:17.258: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=chgrp+2000+%2Fmnt%2Fvolume1%2Ffile1&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:18.384: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/volume1/file1] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:18.384: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:18.385: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:18.385: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=ls+-l+%2Fmnt%2Fvolume1%2Ffile1&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:19.451: INFO: pod fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 exec for cmd ls -l /mnt/volume1/file1, stdout: -rw-r--r-- 1 root 2000 0 Jan 30 22:50 /mnt/volume1/file1, stderr: Jan 30 22:50:19.451: INFO: stdout split: [-rw-r--r-- 1 root 2000 0 Jan 30 22:50 /mnt/volume1/file1], expected gid: 2000 �[1mSTEP�[0m: Changing the sub-directory file ownership to 3000 Jan 30 22:50:19.451: INFO: ExecWithOptions {Command:[/bin/sh -c chgrp 3000 /mnt/volume1/subdir/file2] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:19.451: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:19.452: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:19.452: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=chgrp+3000+%2Fmnt%2Fvolume1%2Fsubdir%2Ffile2&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:20.562: INFO: ExecWithOptions {Command:[/bin/sh -c ls -l /mnt/volume1/subdir/file2] Namespace:fsgroupchangepolicy-3075 PodName:pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 ContainerName:write-pod Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 30 22:50:20.562: INFO: >>> kubeConfig: /root/.kube/config Jan 30 22:50:20.562: INFO: ExecWithOptions: Clientset creation Jan 30 22:50:20.563: INFO: ExecWithOptions: execute(POST https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io/api/v1/namespaces/fsgroupchangepolicy-3075/pods/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398/exec?command=%2Fbin%2Fsh&command=-c&command=ls+-l+%2Fmnt%2Fvolume1%2Fsubdir%2Ffile2&container=write-pod&container=write-pod&stderr=true&stdout=true %!s(MISSING)) Jan 30 22:50:21.731: INFO: pod fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 exec for cmd ls -l /mnt/volume1/subdir/file2, stdout: -rw-r--r-- 1 root 3000 0 Jan 30 22:50 /mnt/volume1/subdir/file2, stderr: Jan 30 22:50:21.731: INFO: stdout split: [-rw-r--r-- 1 root 3000 0 Jan 30 22:50 /mnt/volume1/subdir/file2], expected gid: 3000 �[1mSTEP�[0m: Deleting Pod fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 Jan 30 22:50:21.731: INFO: Deleting pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398" in namespace "fsgroupchangepolicy-3075" Jan 30 22:50:21.877: INFO: Wait up to 5m0s for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398" to be fully deleted �[1mSTEP�[0m: Creating Pod in namespace fsgroupchangepolicy-3075 with fsgroup 1000 Jan 30 22:55:26.739: FAIL: Unexpected error: <*errors.errorString | 0xc005702400>: { s: "pod \"pod-9361d956-3a9e-45fa-92dc-ac8884faccaa\" is not Running: timed out waiting for the condition", } pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa" is not Running: timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.createPodAndVerifyContentGid(0xc00235c9a0, 0xc0057f9068, 0x0, {0xc0056d9298, 0x4}, {0xc0056d929c, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:262 +0x151 k8s.io/kubernetes/test/e2e/storage/testsuites.(*fsGroupChangePolicyTestSuite).DefineTests.func3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:251 +0x556 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006bd040, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Deleting pvc Jan 30 22:55:27.024: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comsnjpg" Jan 30 22:55:27.168: INFO: Waiting up to 3m0s for PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 to get deleted Jan 30 22:55:27.309: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (141.758959ms) Jan 30 22:55:32.452: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (5.284672721s) Jan 30 22:55:37.595: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (10.427286215s) Jan 30 22:55:42.737: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (15.569741952s) Jan 30 22:55:47.880: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (20.711984601s) Jan 30 22:55:53.022: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (25.854611426s) Jan 30 22:55:58.164: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (30.996839642s) Jan 30 22:56:03.307: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (36.139345803s) Jan 30 22:56:08.453: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (41.285093065s) Jan 30 22:56:13.596: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (46.428820765s) Jan 30 22:56:18.740: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (51.572860152s) Jan 30 22:56:23.891: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (56.723483589s) Jan 30 22:56:29.033: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m1.865591244s) Jan 30 22:56:34.175: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m7.007742357s) Jan 30 22:56:39.319: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m12.15100038s) Jan 30 22:56:44.461: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m17.293579682s) Jan 30 22:56:49.605: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m22.437855699s) Jan 30 22:56:54.748: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m27.58054656s) Jan 30 22:56:59.890: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m32.722781775s) Jan 30 22:57:05.036: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m37.867897233s) Jan 30 22:57:10.182: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m43.014847882s) Jan 30 22:57:15.325: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m48.157175426s) Jan 30 22:57:20.470: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m53.30258305s) Jan 30 22:57:25.613: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (1m58.445471418s) Jan 30 22:57:30.781: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m3.613116528s) Jan 30 22:57:35.950: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m8.781905431s) Jan 30 22:57:41.093: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m13.925548247s) Jan 30 22:57:46.239: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m19.071299896s) Jan 30 22:57:51.386: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m24.217896531s) Jan 30 22:57:56.528: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m29.360676528s) Jan 30 22:58:01.672: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m34.503876211s) Jan 30 22:58:06.814: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m39.646647208s) Jan 30 22:58:11.957: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m44.789553766s) Jan 30 22:58:17.100: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m49.932557699s) Jan 30 22:58:22.243: INFO: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 found and phase=Bound (2m55.074889672s) �[1mSTEP�[0m: Deleting sc Jan 30 22:58:27.388: FAIL: while cleanup resource Unexpected error: <errors.aggregate | len:1, cap:1>: [ [ { msg: "persistent Volume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 not deleted by dynamic provisioner: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 still exists within 3m0s", err: { s: "PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 still exists within 3m0s", }, }, ], ] persistent Volume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 not deleted by dynamic provisioner: PersistentVolume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 still exists within 3m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*fsGroupChangePolicyTestSuite).DefineTests.func2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:132 +0x24c panic({0x6bb1ac0, 0xc004144600}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x7d panic({0x623d460, 0x78c75a0}) /usr/local/go/src/runtime/panic.go:884 +0x212 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0013a4280, 0x13b}, {0xc003480d30?, 0x7047513?, 0xc003480d50?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x197 k8s.io/kubernetes/test/e2e/framework.Fail({0xc0013a4140, 0x126}, {0xc0056d89d0?, 0xc0013a4140?, 0xc0040ec820?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:63 +0x145 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003480e98, {0x78f18f8, 0xa516880}, 0x0, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x1bd k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003480e98, {0x78f18f8, 0xa516880}, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0x92 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x7938928?, {0x78cb4e0?, 0xc005702400?}, {0x0?, 0x2?, 0x0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0x9d k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/storage/testsuites.createPodAndVerifyContentGid(0xc00235c9a0, 0xc0057f9068, 0x0, {0xc0056d9298, 0x4}, {0xc0056d929c, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:262 +0x151 k8s.io/kubernetes/test/e2e/storage/testsuites.(*fsGroupChangePolicyTestSuite).DefineTests.func3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:251 +0x556 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006bd040, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "fsgroupchangepolicy-3075". �[1mSTEP�[0m: Found 32 events. Jan 30 22:58:27.532: INFO: At 2023-01-30 22:48:54 +0000 UTC - event for ebs.csi.aws.comsnjpg: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Jan 30 22:58:27.532: INFO: At 2023-01-30 22:48:54 +0000 UTC - event for ebs.csi.aws.comsnjpg: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator Jan 30 22:58:27.532: INFO: At 2023-01-30 22:48:54 +0000 UTC - event for ebs.csi.aws.comsnjpg: {ebs.csi.aws.com_ip-172-20-63-44_fec1cc0e-6bda-40bc-98e3-8a534e13e54e } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-3075/ebs.csi.aws.comsnjpg" Jan 30 22:58:27.532: INFO: At 2023-01-30 22:48:58 +0000 UTC - event for ebs.csi.aws.comsnjpg: {ebs.csi.aws.com_ip-172-20-63-44_fec1cc0e-6bda-40bc-98e3-8a534e13e54e } ProvisioningSucceeded: Successfully provisioned volume pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212 Jan 30 22:58:27.532: INFO: At 2023-01-30 22:48:58 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {default-scheduler } Scheduled: Successfully assigned fsgroupchangepolicy-3075/pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:01 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-10eb92ba-2ec6-4aab-b7c5-5415956b1212" Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:04 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "27051d5edf3db011f94f9fb3a6f576f97b04966bf02444125a6de61032ce54fd" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:05 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:08 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "66f19b97f49afd04b295f4a580fb4858ff1f51ef8d6c8e399060978d2d16fb1c" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:13 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a8bd25ec137e380b22dcffb125ce7a08120d2ac1b55441dc9499c5574c6aa324" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:15 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f174bf49dad3f5632013729aeff1c663fc9870371f247e62e95634fb3be8f17d" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:22 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "032e622bd7decc28d769879976e04b116ad2b7a8ddd8bae29084d30b6b864718" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:26 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ad8408e9161ba31e1dc9d85ec103d72d4677726bd940a48947f20567e3934606" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:31 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ed3d91e586c85b8a9343cf819e26687018e239d76b8224f5690e8548f73931d0" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:38 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "07bd3dd63a49058b4729e92d6ddd9eb5707cc5766d28dbd5fb6673bd7b359fe5" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:46 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fe0a85b671916bfcd8e1004277c8505536e362eaf0b76fe3ae12b0052c2c1a00" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:49:56 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5c64c1e8ab40987ff6a1f2409e437136f1c50ac641ae525f642fa9019c6d5ab8" network for pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398": networkPlugin cni failed to set up pod "pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:05 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:05 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Created: Created container write-pod Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:05 +0000 UTC - event for pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Started: Started container write-pod Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:26 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {default-scheduler } Scheduled: Successfully assigned fsgroupchangepolicy-3075/pod-9361d956-3a9e-45fa-92dc-ac8884faccaa to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:35 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2927693d00fb37f03917f20c7b8340ea2d5aae42af8a47ec96da80bcbbab2e3f" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:36 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:37 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e6db461c92b1f434e013794caab4c786e0b7b1b1174d950ec6058cd2ffa76867" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:40 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a6f6ebd6e330b1d6a4beabca9efa91d9d86c89f57b6fe40478837fbc781e8eb3" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:44 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "acaf8023b49e07d89f5ad7a00d07daa2977098f3f96267e6ce8f3c8b83ccb710" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:48 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2cc73903b382b52b2be42f35228ae155b34127dd10b1fd2453bc474e9da2d0f7" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:50 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0f8184571f54314e8007c42d1324afe1593341b0fdfd38c34b8cab152b47477c" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:53 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "92c0eba047db395136603c1d018c379736d47967a7732159959fd915880160f4" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:56 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d53bd9fea755743afafaa4e7e4b7981b7bcc492486ce44084fb0cc952cc6bb2" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:50:58 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5fac9be6dac9b07516385257aade8267c9a0402fc81473dd23958ef66263edd1" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.532: INFO: At 2023-01-30 22:51:00 +0000 UTC - event for pod-9361d956-3a9e-45fa-92dc-ac8884faccaa: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "df44adc30930e61afc34776a07884025a053881e45c298acdf270ee73b88e3cf" network for pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa": networkPlugin cni failed to set up pod "pod-9361d956-3a9e-45fa-92dc-ac8884faccaa_fsgroupchangepolicy-3075" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:58:27.675: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:58:27.675: INFO: pod-9361d956-3a9e-45fa-92dc-ac8884faccaa ip-172-20-63-7.sa-east-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:50:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:56:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:56:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:50:26 +0000 UTC }] Jan 30 22:58:27.676: INFO: Jan 30 22:58:27.967: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:58:28.116: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 17005 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:56:16 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:56:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-046084c5584ee86ce kubernetes.io/csi/ebs.csi.aws.com^vol-05300c60d93379ef6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-046084c5584ee86ce,DevicePath:,},},Config:nil,},} Jan 30 22:58:28.117: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:58:28.263: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:58:28.421: INFO: pod-1 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: dns-test-8a9eb548-4fa3-4555-ab5d-f05cb5b20fb9 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container coredns ready: true, restart count 0 Jan 30 22:58:28.421: INFO: ss-0 started at 2023-01-30 22:58:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container webserver ready: false, restart count 0 Jan 30 22:58:28.421: INFO: pod-4ca58db2-3b37-41db-bc9b-78945a25c0db started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: pod-2 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: execpod6l554 started at 2023-01-30 22:58:17 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:58:28.421: INFO: netserver-0 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:58:28.421: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:58:28.421: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:28.421: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:28.421: INFO: pod-0 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: externalname-service-fjf98 started at 2023-01-30 22:58:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container externalname-service ready: true, restart count 0 Jan 30 22:58:28.421: INFO: netserver-0 started at 2023-01-30 22:57:41 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:28.421: INFO: startup-1d596fd1-527d-4d5b-8875-60f0e988a7b0 started at 2023-01-30 22:58:17 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container busybox ready: false, restart count 0 Jan 30 22:58:28.421: INFO: test-container-pod started at 2023-01-30 22:58:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:28.421: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:58:28.421: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:58:28.421: INFO: netserver-0 started at 2023-01-30 22:57:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:28.421: INFO: ss-2 started at 2023-01-30 22:57:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:28.421: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-925h9 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:28.421: INFO: downwardapi-volume-2b38c295-c9c8-4911-af78-0a4bb3aad9af started at 2023-01-30 22:58:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container client-container ready: false, restart count 0 Jan 30 22:58:28.421: INFO: test-deployment-854fdc678-jqs9j started at 2023-01-30 22:58:12 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:28.421: INFO: Container test-deployment ready: true, restart count 0 Jan 30 22:58:28.881: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:58:28.881: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:58:29.024: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 17024 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-46-143.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-2834":"csi-mock-csi-mock-volumes-2834","csi-mock-csi-mock-volumes-9903":"csi-mock-csi-mock-volumes-9903","ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-30 22:58:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:58:29.024: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:58:29.169: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:58:29.319: INFO: netserver-1 started at 2023-01-30 22:57:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:29.319: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:29.319: INFO: test-deployment-854fdc678-nlz4g started at 2023-01-30 22:58:19 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:29.319: INFO: Container test-deployment ready: true, restart count 0 Jan 30 22:58:29.319: INFO: sample-webhook-deployment-6c69dbd86b-cllb4 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:29.319: INFO: csi-mockplugin-0 started at 2023-01-30 22:56:56 +0000 UTC (0+4 container statuses recorded) Jan 30 22:58:29.319: INFO: Container busybox ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container csi-provisioner ready: true, restart count 1 Jan 30 22:58:29.319: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container mock ready: true, restart count 0 Jan 30 22:58:29.319: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:58:29.319: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:58:29.319: INFO: test-container-pod started at 2023-01-30 22:58:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:29.319: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:29.319: INFO: externalname-service-nd8w4 started at 2023-01-30 22:58:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:29.319: INFO: Container externalname-service ready: true, restart count 0 Jan 30 22:58:29.319: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:58:29.319: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:29.319: INFO: csi-mockplugin-0 started at 2023-01-30 22:56:37 +0000 UTC (0+4 container statuses recorded) Jan 30 22:58:29.319: INFO: Container busybox ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:58:29.319: INFO: Container mock ready: true, restart count 0 Jan 30 22:58:29.319: INFO: netserver-1 started at <nil> (0+0 container statuses recorded) Jan 30 22:58:29.319: INFO: pvc-volume-tester-ktw6w started at 2023-01-30 22:57:46 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:29.319: INFO: Container volume-tester ready: true, restart count 0 Jan 30 22:58:29.913: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:58:29.913: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:58:30.056: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 16997 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-56-33.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2753":"ip-172-20-56-33.sa-east-1.compute.internal","ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:57:51 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:58:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:20 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:20 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:20 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:58:20 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-2753^859bb179-a0f1-11ed-ad7d-968c240e09c3 kubernetes.io/csi/ebs.csi.aws.com^vol-00c93e6eed8b9fa82 kubernetes.io/csi/ebs.csi.aws.com^vol-00d01ca9fd769afde],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00c93e6eed8b9fa82,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00d01ca9fd769afde,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-2753^859bb179-a0f1-11ed-ad7d-968c240e09c3,DevicePath:,},},Config:nil,},} Jan 30 22:58:30.057: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:58:30.203: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:58:30.357: INFO: pod-7e09216f-21ee-430a-b8f0-7e25e57c9d24 started at 2023-01-30 22:58:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.357: INFO: Container test-container ready: false, restart count 0 Jan 30 22:58:30.357: INFO: ss-0 started at 2023-01-30 22:58:07 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.357: INFO: Container webserver ready: false, restart count 0 Jan 30 22:58:30.357: INFO: agnhost-primary-pgn6l started at 2023-01-30 22:58:28 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.357: INFO: Container agnhost-primary ready: false, restart count 0 Jan 30 22:58:30.357: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:58:30.357: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:58:30.358: INFO: netserver-2 started at 2023-01-30 22:57:50 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:30.358: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:58:30.358: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:30.358: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container coredns ready: true, restart count 0 Jan 30 22:58:30.358: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:58:30.358: INFO: netserver-2 started at 2023-01-30 22:58:28 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container webserver ready: false, restart count 0 Jan 30 22:58:30.358: INFO: ss-1 started at 2023-01-30 22:58:17 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container webserver ready: false, restart count 0 Jan 30 22:58:30.358: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:57:47 +0000 UTC (0+7 container statuses recorded) Jan 30 22:58:30.358: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:30.358: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:30.358: INFO: inline-volume-tester-ms44c started at 2023-01-30 22:57:51 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:58:30.358: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-sh26v started at 2023-01-30 22:58:12 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:30.358: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:58:30.916: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:58:30.916: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:58:31.059: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 10331 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:58:31.059: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:58:31.205: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:58:31.361: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:58:31.361: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:58:31.361: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:31.361: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:58:31.361: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:58:31.361: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:58:31.361: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:58:31.361: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:58:31.361: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:58:31.361: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:58:31.361: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:58:31.361: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:31.361: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:58:31.361: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:58:31.361: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:31.361: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:31.816: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:58:31.816: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:58:31.959: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 17016 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5023":"ip-172-20-63-7.sa-east-1.compute.internal","csi-hostpath-ephemeral-991":"ip-172-20-63-7.sa-east-1.compute.internal","csi-mock-csi-mock-volumes-3826":"csi-mock-csi-mock-volumes-3826","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:58:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-3826^4 kubernetes.io/csi/ebs.csi.aws.com^vol-0bb8ad584573965a4 kubernetes.io/csi/ebs.csi.aws.com^vol-0c717ad718a98bb5e kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bb8ad584573965a4,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0c717ad718a98bb5e,DevicePath:,},},Config:nil,},} Jan 30 22:58:31.960: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:58:32.121: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:58:32.272: INFO: pod-9361d956-3a9e-45fa-92dc-ac8884faccaa started at 2023-01-30 22:50:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container write-pod ready: true, restart count 0 Jan 30 22:58:32.272: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:58:32.272: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:32.272: INFO: agnhost-primary-xprws started at 2023-01-30 22:58:29 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container agnhost-primary ready: false, restart count 0 Jan 30 22:58:32.272: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:57:35 +0000 UTC (0+7 container statuses recorded) Jan 30 22:58:32.272: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:58:32.272: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:58:32.272: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:58:32.272: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:58:32.272: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:58:32.272: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:58:32.272: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:58:32.272: INFO: pvc-volume-tester-7qg6f started at 2023-01-30 22:56:57 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container volume-tester ready: false, restart count 0 Jan 30 22:58:32.272: INFO: rs-n22dj started at 2023-01-30 22:56:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container donothing ready: false, restart count 0 Jan 30 22:58:32.272: INFO: netserver-3 started at 2023-01-30 22:58:28 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container webserver ready: false, restart count 0 Jan 30 22:58:32.272: INFO: csi-mockplugin-resizer-0 started at 2023-01-30 22:54:59 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:58:32.272: INFO: csi-mockplugin-0 started at 2023-01-30 22:54:59 +0000 UTC (0+3 container statuses recorded) Jan 30 22:58:32.272: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container mock ready: true, restart count 0 Jan 30 22:58:32.272: INFO: inline-volume-tester-g5whx started at 2023-01-30 22:57:51 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:58:32.272: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:57:50 +0000 UTC (0+7 container statuses recorded) Jan 30 22:58:32.272: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:58:32.272: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:58:32.272: INFO: ss-1 started at 2023-01-30 22:57:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:32.272: INFO: pod-subpath-test-dynamicpv-6dxj started at 2023-01-30 22:58:23 +0000 UTC (1+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Init container init-volume-dynamicpv-6dxj ready: true, restart count 0 Jan 30 22:58:32.272: INFO: Container test-container-subpath-dynamicpv-6dxj ready: true, restart count 0 Jan 30 22:58:32.272: INFO: netserver-3 started at 2023-01-30 22:57:50 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.272: INFO: Container webserver ready: true, restart count 0 Jan 30 22:58:32.273: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:54:59 +0000 UTC (0+1 container statuses recorded) Jan 30 22:58:32.273: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:58:32.748: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:58:32.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "fsgroupchangepolicy-3075" for this suite.
Find pod-9361d956-3a9e-45fa-92dc-ac8884faccaa mentions in log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(delayed\sbinding\)\]\stopology\sshould\sprovision\sa\svolume\sand\sschedule\sa\spod\swith\sAllowedTopologies$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164 Jan 30 22:53:58.004: Unexpected error: <*errors.errorString | 0xc00025e240>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:180from junit_09.xml
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:48:55.973: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename topology �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provision a volume and schedule a pod with AllowedTopologies /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164 Jan 30 22:48:57.134: INFO: found topology map[topology.ebs.csi.aws.com/zone:sa-east-1a] Jan 30 22:48:57.135: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-4876-e2e-sc4b76n 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Provisioner:ebs.csi.aws.com,Parameters:map[string]string{encrypted: true,type: gp3,},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:*true,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{{[{topology.ebs.csi.aws.com/zone [sa-east-1a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-4876 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-4876-e2e-sc4b76n,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},AllocatedResources:ResourceList{},ResizeStatus:nil,},} �[1mSTEP�[0m: Creating sc �[1mSTEP�[0m: Creating pvc �[1mSTEP�[0m: Creating pod Jan 30 22:53:58.004: FAIL: Unexpected error: <*errors.errorString | 0xc00025e240>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*topologyTestSuite).DefineTests.func3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:180 +0x4cb k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000871520, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f �[1mSTEP�[0m: Deleting pod Jan 30 22:53:58.004: INFO: Deleting pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f" in namespace "topology-4876" Jan 30 22:53:58.150: INFO: Wait up to 5m0s for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f" to be fully deleted �[1mSTEP�[0m: Deleting pvc Jan 30 22:54:02.724: INFO: Deleting PersistentVolumeClaim "pvc-xhncs" Jan 30 22:54:02.870: INFO: Waiting up to 3m0s for PersistentVolume pvc-0af2a32f-a75f-4edb-8673-f57373cfb115 to get deleted Jan 30 22:54:03.013: INFO: PersistentVolume pvc-0af2a32f-a75f-4edb-8673-f57373cfb115 found and phase=Released (143.239538ms) Jan 30 22:54:08.159: INFO: PersistentVolume pvc-0af2a32f-a75f-4edb-8673-f57373cfb115 found and phase=Released (5.288908234s) Jan 30 22:54:13.303: INFO: PersistentVolume pvc-0af2a32f-a75f-4edb-8673-f57373cfb115 found and phase=Released (10.433572906s) Jan 30 22:54:18.447: INFO: PersistentVolume pvc-0af2a32f-a75f-4edb-8673-f57373cfb115 was removed �[1mSTEP�[0m: Deleting sc [AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "topology-4876". �[1mSTEP�[0m: Found 17 events. Jan 30 22:54:18.736: INFO: At 2023-01-30 22:48:57 +0000 UTC - event for pvc-xhncs: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Jan 30 22:54:18.736: INFO: At 2023-01-30 22:48:57 +0000 UTC - event for pvc-xhncs: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator Jan 30 22:54:18.736: INFO: At 2023-01-30 22:48:57 +0000 UTC - event for pvc-xhncs: {ebs.csi.aws.com_ip-172-20-63-44_fec1cc0e-6bda-40bc-98e3-8a534e13e54e } Provisioning: External provisioner is provisioning volume for claim "topology-4876/pvc-xhncs" Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:00 +0000 UTC - event for pvc-xhncs: {ebs.csi.aws.com_ip-172-20-63-44_fec1cc0e-6bda-40bc-98e3-8a534e13e54e } ProvisioningSucceeded: Successfully provisioned volume pvc-0af2a32f-a75f-4edb-8673-f57373cfb115 Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:01 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {default-scheduler } Scheduled: Successfully assigned topology-4876/pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:03 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-0af2a32f-a75f-4edb-8673-f57373cfb115" Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:13 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e476d600ef6cb1227bd15899536345b584130d069af1320ac4dfd870e1e4ae64" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:13 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:14 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "08160ced630f0c5147900339e9a1f2e7f3e78a58e9748ff3b3001be7b950d983" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:16 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "48636c984f9101ecbb6f811554d1a0d2ed90c5ab95abf0272a4e5811c631d601" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:17 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fe15835f0775ae56a4d1ea75f8a28b1e5a7261ddb703ddd54795537ce610adb4" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:19 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "49dbb4ef64757e9572830525d81d3b3a873c39db29d72f5a5fa62f9a061888b1" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:20 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "845ab74a39854a6376d2a05c8b4a8cdb5bbf5216ff2cdf269ee989e36e9ed4e1" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:22 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "54433115eb331b835ae613a8e7ed6c7a15f2072a807782ebbb302de63aced305" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:23 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "720934008a05ff70721ee0b3ebbfde016fece13b1d8671c873cbad63c0cd8abf" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:25 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9d8ce0edf4e5249e84ce7751d9eb82dd4e157dcf7d8b6f0d46b2b122fc1e820e" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.736: INFO: At 2023-01-30 22:49:26 +0000 UTC - event for pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4bf7b1daa673d3cbe1aaa6c55cce9e8ed8bce126a64272a10b42c1dacec298ba" network for pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f": networkPlugin cni failed to set up pod "pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f_topology-4876" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:18.879: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:54:18.879: INFO: Jan 30 22:54:19.023: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:19.167: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 10033 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:19.167: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:19.320: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:19.474: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container coredns ready: true, restart count 0 Jan 30 22:54:19.474: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-mvnww started at 2023-01-30 22:54:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:19.474: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:19.474: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:19.474: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:19.474: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:19.474: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: rs-hh4qw started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:19.474: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-5pprs started at 2023-01-30 22:50:34 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: pod-subpath-test-preprovisionedpv-zc5t started at 2023-01-30 22:50:47 +0000 UTC (1+2 container statuses recorded) Jan 30 22:54:19.474: INFO: Init container test-init-subpath-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:19.474: INFO: Container test-container-subpath-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:19.474: INFO: Container test-container-volume-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.474: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:19.474: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:19.941: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:19.941: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:20.085: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 9940 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:20.085: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:20.231: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:20.382: INFO: adopt-release-qqtpc started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container c ready: true, restart count 0 Jan 30 22:54:20.382: INFO: adopt-release-rpjrs started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container c ready: false, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-dw5bz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jh598 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: ss2-0 started at 2023-01-30 22:46:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container webserver ready: false, restart count 0 Jan 30 22:54:20.382: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-s8w47 started at 2023-01-30 22:48:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sw7v started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: rs-8d5pg started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:20.382: INFO: pod-secrets-025721a8-1f1a-425c-b117-8841c9b333cd started at 2023-01-30 22:49:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container secret-volume-test ready: false, restart count 0 Jan 30 22:54:20.382: INFO: pod-subpath-test-preprovisionedpv-z9fs started at 2023-01-30 22:50:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container test-container-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-q6pfk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:20.382: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:20.382: INFO: externalsvc-qmzsv started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-bxpzz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:20.382: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:20.382: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:20.382: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:20.382: INFO: busybox-host-aliases4c49ce25-2bdd-4be9-8511-41e5a85d0929 started at 2023-01-30 22:50:17 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container busybox-host-aliases4c49ce25-2bdd-4be9-8511-41e5a85d0929 ready: false, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-pt29g started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sx9d started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: pod-4f1d4caa-3b55-4d16-b486-82f59f49f567 started at 2023-01-30 22:50:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container test-container ready: false, restart count 0 Jan 30 22:54:20.382: INFO: ss2-2 started at 2023-01-30 22:44:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2g6gh started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-g66zc started at 2023-01-30 22:49:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:20.382: INFO: pod-cf5ae510-5ee5-443b-b0c3-086ca0deda69 started at 2023-01-30 22:49:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-8w4h2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:20.382: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-4cqsj started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:20.382: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.093: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:21.093: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:21.236: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 6773 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:21.236: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:21.382: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:21.534: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rj6w6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-k9mvg started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:21.535: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-kl9wl started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-dgjvl started at 2023-01-30 22:51:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-hx4t7 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:49:23 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:21.535: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:21.535: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:21.535: INFO: local-injector started at 2023-01-30 22:50:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container local-injector ready: false, restart count 0 Jan 30 22:54:21.535: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:21.535: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:21.535: INFO: fail-once-non-local-7gtm7 started at 2023-01-30 22:44:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container c ready: false, restart count 0 Jan 30 22:54:21.535: INFO: fail-once-non-local-nvn9l started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container c ready: false, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cxdvn started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:21.535: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:21.535: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:21.535: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:21.535: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container coredns ready: true, restart count 0 Jan 30 22:54:21.535: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-tnngb started at 2023-01-30 22:49:40 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-ctp24 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: exec-volume-test-preprovisionedpv-4xz9 started at 2023-01-30 22:51:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container exec-container-preprovisionedpv-4xz9 ready: false, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fmwp2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g95mq started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rqjpx started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:21.535: INFO: fail-once-non-local-ksmfx started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container c ready: false, restart count 0 Jan 30 22:54:21.535: INFO: ss2-0 started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:21.535: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-z46zz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:21.535: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:54:22.329: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:22.330: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:22.473: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 7035 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:22.473: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:22.625: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:22.773: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:54:22.773: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:54:22.773: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:54:22.773: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:22.773: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:22.773: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:22.773: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:54:22.773: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:22.773: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:54:22.773: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:54:22.773: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:54:22.773: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:54:22.773: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:54:22.773: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:54:22.773: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:22.773: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:54:23.224: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:23.225: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:23.368: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 7860 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},},Config:nil,},} Jan 30 22:54:23.369: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:23.514: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-9t9kb started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: ss2-1 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container webserver ready: false, restart count 0 Jan 30 22:54:23.667: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:23.667: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-grfw9 started at 2023-01-30 22:45:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:23.667: INFO: httpd started at 2023-01-30 22:52:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container httpd ready: false, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-nft6k started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-zrntw started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fp6pt started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-qbtpc started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: externalsvc-c5qz7 started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:54:23.667: INFO: termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2 started at 2023-01-30 22:50:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container termination-message-container ready: false, restart count 0 Jan 30 22:54:23.667: INFO: pod-9361d956-3a9e-45fa-92dc-ac8884faccaa started at 2023-01-30 22:50:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-v8ln8 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: csi-mockplugin-0 started at 2023-01-30 22:49:11 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:23.667: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:23.667: INFO: Container driver-registrar ready: false, restart count 0 Jan 30 22:54:23.667: INFO: Container mock ready: false, restart count 0 Jan 30 22:54:23.667: INFO: startup-d0748011-46a8-4bb4-9fe0-3c4baf5fbfed started at 2023-01-30 22:49:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container busybox ready: false, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cnpbk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:54:23.667: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:23.667: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:23.667: INFO: rs-4k8s4 started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:23.667: INFO: pod-70b62feb-0f03-4bb7-97a6-9bed39f38a55 started at 2023-01-30 22:50:29 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:23.667: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:23.667: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:23.667: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-56dt8 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2m4f6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:23.667: INFO: pod-d8cff309-3d6a-4ce5-9ac9-b57de7155461 started at 2023-01-30 22:45:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container write-pod ready: true, restart count 0 Jan 30 22:54:23.667: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:49:11 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:23.667: INFO: inline-volume-tester-62nrc started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 30 22:54:23.667: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fq2r6 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:23.667: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:24.162: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:24.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "topology-4876" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 30 22:56:20.946: Failed waiting for pods to enter running: timed out waiting for the condition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80from junit_07.xml
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:43:06.760: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-5460 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a new StatefulSet Jan 30 22:43:08.189: INFO: Found 1 stateful pods, waiting for 3 Jan 30 22:43:18.335: INFO: Found 1 stateful pods, waiting for 3 Jan 30 22:43:28.332: INFO: Found 1 stateful pods, waiting for 3 Jan 30 22:43:38.334: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:43:48.333: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:43:58.334: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:58.334: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:58.334: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:44:08.335: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:08.335: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:08.335: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:44:18.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:18.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:18.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:44:28.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:28.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:28.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:44:38.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:38.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:38.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:44:48.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:48.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:48.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:44:58.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:58.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:44:58.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:45:08.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:08.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:08.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:45:18.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:18.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:18.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:45:28.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:28.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:28.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:45:38.333: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:38.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:38.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:45:48.333: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:48.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:48.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:45:58.332: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:58.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:45:58.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Jan 30 22:45:59.055: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Not applying an update when the partition is greater than the number of replicas �[1mSTEP�[0m: Performing a canary update Jan 30 22:45:59.637: INFO: Updating stateful set ss2 Jan 30 22:45:59.921: INFO: Waiting for Pod statefulset-5460/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:10.207: INFO: Waiting for Pod statefulset-5460/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb �[1mSTEP�[0m: Restoring Pods to the correct revision when they are deleted Jan 30 22:46:20.658: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:46:30.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:46:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:46:50.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:47:00.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:47:10.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:47:20.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:47:30.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:47:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:47:50.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:48:00.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:48:10.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:48:20.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:48:30.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:48:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:48:50.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:49:00.814: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:49:10.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:49:20.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:49:30.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:49:40.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:49:50.805: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:50:00.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:50:10.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:50:20.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:50:30.802: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:50:40.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:50:50.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:51:00.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:51:10.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:51:20.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:51:30.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:51:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:51:50.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:52:00.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:52:10.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:52:20.810: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:52:30.802: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:52:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:52:50.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:53:00.802: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:53:10.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:53:20.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:53:30.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:53:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:53:50.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:54:00.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:54:10.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:54:20.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:54:30.805: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:54:40.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:54:50.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:55:00.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:55:10.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:55:20.801: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:55:30.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:55:40.803: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:55:50.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:56:00.805: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:56:10.805: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:56:20.804: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:56:20.946: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:56:20.946: FAIL: Failed waiting for pods to enter running: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.8() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:425 +0x1b1f k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000dae680, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 30 22:56:21.089: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-5460 describe po ss2-0' Jan 30 22:56:21.905: INFO: stderr: "" Jan 30 22:56:21.905: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-5460\nPriority: 0\nNode: ip-172-20-46-143.sa-east-1.compute.internal/172.20.46.143\nStart Time: Mon, 30 Jan 2023 22:46:20 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-57bbdd95cb\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: <none>\nStatus: Running\nIP: 172.20.39.57\nIPs:\n IP: 172.20.39.57\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: docker://cc91185a67daccd827c46f47253eb0348f2b8ee65a1d16281b22f3d86ed6142b\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Mon, 30 Jan 2023 22:56:12 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brj54 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-brj54:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-5460/ss2-0 to ip-172-20-46-143.sa-east-1.compute.internal\n Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"b8e46e4438a9ed5918111925eb948a7090a6de743af3052a0f8c2391d3d04d1d\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m57s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"db3b79a169ad5619a32dee8f48660e28097116236cd9a9b45fd857c4511cdbad\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m57s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"fbff9d01d6c1ec8e5dc2a6ef6b9ec297ba8d9c321f1adeb8a31c6866920e9c8a\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m55s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"69708343fb06aa53d644928db35d1e1762743c360ed4f5eb1dde321027cb6dbf\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m54s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"02ec8db55a7b77932ca18f738d8de162933704f713ebbae4458af5849481f1ad\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m53s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"512924a72a4d7fecb88265362ded39c004e3b35f121424317d2c4e902dfa708e\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m52s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"f8dfc358509584b5ed72acd4148ab325e52cf87fbd20e39db1e54cbec86ca091\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m51s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"63c8ef6de05ce41db323ec46c6726bc316e6bfb5bdae1411d189d2c765be9403\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m50s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"ba9e0d647df9de0df8c2e48a6755b220a6b7b0348724611bd1e3cc1a4d0ee761\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m45s (x4 over 9m49s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"694dfd440a45c3c9a61d801be4bf8aaa0e05b85f598b705cc29aa24fc0476751\" network for pod \"ss2-0\": networkPlugin cni failed to set up pod \"ss2-0_statefulset-5460\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Normal SandboxChanged 5m (x169 over 9m58s) kubelet Pod sandbox changed, it will be killed and re-created.\n" Jan 30 22:56:21.905: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-5460 Priority: 0 Node: ip-172-20-46-143.sa-east-1.compute.internal/172.20.46.143 Start Time: Mon, 30 Jan 2023 22:46:20 +0000 Labels: baz=blah controller-revision-hash=ss2-57bbdd95cb foo=bar statefulset.kubernetes.io/pod-name=ss2-0 Annotations: <none> Status: Running IP: 172.20.39.57 IPs: IP: 172.20.39.57 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: docker://cc91185a67daccd827c46f47253eb0348f2b8ee65a1d16281b22f3d86ed6142b Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Running Started: Mon, 30 Jan 2023 22:56:12 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brj54 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-brj54: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned statefulset-5460/ss2-0 to ip-172-20-46-143.sa-east-1.compute.internal Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b8e46e4438a9ed5918111925eb948a7090a6de743af3052a0f8c2391d3d04d1d" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m57s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "db3b79a169ad5619a32dee8f48660e28097116236cd9a9b45fd857c4511cdbad" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m57s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fbff9d01d6c1ec8e5dc2a6ef6b9ec297ba8d9c321f1adeb8a31c6866920e9c8a" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m55s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "69708343fb06aa53d644928db35d1e1762743c360ed4f5eb1dde321027cb6dbf" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m54s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "02ec8db55a7b77932ca18f738d8de162933704f713ebbae4458af5849481f1ad" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m53s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "512924a72a4d7fecb88265362ded39c004e3b35f121424317d2c4e902dfa708e" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m52s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f8dfc358509584b5ed72acd4148ab325e52cf87fbd20e39db1e54cbec86ca091" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m51s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "63c8ef6de05ce41db323ec46c6726bc316e6bfb5bdae1411d189d2c765be9403" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m50s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ba9e0d647df9de0df8c2e48a6755b220a6b7b0348724611bd1e3cc1a4d0ee761" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m45s (x4 over 9m49s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "694dfd440a45c3c9a61d801be4bf8aaa0e05b85f598b705cc29aa24fc0476751" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Normal SandboxChanged 5m (x169 over 9m58s) kubelet Pod sandbox changed, it will be killed and re-created. Jan 30 22:56:21.905: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-5460 logs ss2-0 --tail=100' Jan 30 22:56:22.553: INFO: stderr: "" Jan 30 22:56:22.553: INFO: stdout: "[Mon Jan 30 22:56:12.461580 2023] [mpm_event:notice] [pid 1:tid 140061183384424] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Jan 30 22:56:12.467649 2023] [core:notice] [pid 1:tid 140061183384424] AH00094: Command line: 'httpd -D FOREGROUND'\n172.20.46.143 - - [30/Jan/2023:22:56:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:56:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Jan 30 22:56:22.553: INFO: Last 100 log lines of ss2-0: [Mon Jan 30 22:56:12.461580 2023] [mpm_event:notice] [pid 1:tid 140061183384424] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Mon Jan 30 22:56:12.467649 2023] [core:notice] [pid 1:tid 140061183384424] AH00094: Command line: 'httpd -D FOREGROUND' 172.20.46.143 - - [30/Jan/2023:22:56:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:56:21 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 30 22:56:22.553: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-5460 describe po ss2-1' Jan 30 22:56:23.345: INFO: stderr: "" Jan 30 22:56:23.345: INFO: stdout: "Name: ss2-1\nNamespace: statefulset-5460\nPriority: 0\nNode: ip-172-20-37-244.sa-east-1.compute.internal/172.20.37.244\nStart Time: Mon, 30 Jan 2023 22:43:37 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-57bbdd95cb\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations: <none>\nStatus: Running\nIP: 172.20.46.4\nIPs:\n IP: 172.20.46.4\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: docker://45e6f7284237df4209fcf1ff0d3240e18bc6cf1023e692cdc5941bcf4ba4de36\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Mon, 30 Jan 2023 22:43:54 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9h2q (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-g9h2q:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 12m default-scheduler Successfully assigned statefulset-5460/ss2-1 to ip-172-20-37-244.sa-east-1.compute.internal\n Normal Pulling 12m kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\"\n Normal Pulled 12m kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\" in 13.898120235s (15.589660611s including waiting)\n Normal Created 12m kubelet Created container webserver\n Normal Started 12m kubelet Started container webserver\n" Jan 30 22:56:23.345: INFO: Output of kubectl describe ss2-1: Name: ss2-1 Namespace: statefulset-5460 Priority: 0 Node: ip-172-20-37-244.sa-east-1.compute.internal/172.20.37.244 Start Time: Mon, 30 Jan 2023 22:43:37 +0000 Labels: baz=blah controller-revision-hash=ss2-57bbdd95cb foo=bar statefulset.kubernetes.io/pod-name=ss2-1 Annotations: <none> Status: Running IP: 172.20.46.4 IPs: IP: 172.20.46.4 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: docker://45e6f7284237df4209fcf1ff0d3240e18bc6cf1023e692cdc5941bcf4ba4de36 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Running Started: Mon, 30 Jan 2023 22:43:54 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9h2q (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-g9h2q: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12m default-scheduler Successfully assigned statefulset-5460/ss2-1 to ip-172-20-37-244.sa-east-1.compute.internal Normal Pulling 12m kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" Normal Pulled 12m kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" in 13.898120235s (15.589660611s including waiting) Normal Created 12m kubelet Created container webserver Normal Started 12m kubelet Started container webserver Jan 30 22:56:23.345: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-5460 logs ss2-1 --tail=100' Jan 30 22:56:24.018: INFO: stderr: "" Jan 30 22:56:24.018: INFO: stdout: "172.20.37.244 - - [30/Jan/2023:22:54:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:54:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:55:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.244 - - [30/Jan/2023:22:56:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Jan 30 22:56:24.018: INFO: Last 100 log lines of ss2-1: 172.20.37.244 - - [30/Jan/2023:22:54:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:54:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:55:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.244 - - [30/Jan/2023:22:56:23 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 30 22:56:24.018: INFO: Deleting all statefulset in ns statefulset-5460 Jan 30 22:56:24.160: INFO: Scaling statefulset ss2 to 0 Jan 30 22:56:54.734: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 22:56:54.875: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "statefulset-5460". �[1mSTEP�[0m: Found 54 events. Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:07 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:08 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:08 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-5460/ss2-0 to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:31 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Started: Started container webserver Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:31 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" in 10.465343745s (22.598379572s including waiting) Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:31 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Created: Created container webserver Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:37 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:37 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-5460/ss2-1 to ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:38 +0000 UTC - event for ss2-1: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:54 +0000 UTC - event for ss2-1: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} Created: Created container webserver Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:54 +0000 UTC - event for ss2-1: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} Started: Started container webserver Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:54 +0000 UTC - event for ss2-1: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" in 13.898120235s (15.589660611s including waiting) Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:55 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:55 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-5460/ss2-2 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:56:55.453: INFO: At 2023-01-30 22:43:58 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e9e67f1818189330c0c06097e736ee77472ac63e189e2abfda740b4c67480e5c" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:01 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:02 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3381f4dee3f3eafc03c961b3267253febaa45b51b018189fd5087e6bd679f3ec" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:05 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ba0424976b105ec5e6796a26c399b8368c731edc4ea382f7912f87b832d1dc23" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:09 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3b388bc3ea5a7a0bfabd40869f470431980cc6ba1e5f34cff8c16f1bfd9ee4ca" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:12 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4be4ca4c6c3144d81244953598d87272910017651ebc31f91eacc752102c15c1" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:16 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "885bfc077030d0f9265411c549486078659d7ed65e41e433526c1c230dd48b05" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:22 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e46ae61af950c64bd4e99bc4c4eb49cab8672b8f097ad531b702a2c673036d80" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:28 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f9476a96dd484fe0d02193bb21f52518eeb22cd3877455d013baa489711f6963" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:44:32 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "326b98aa6cc553c5541e640ad4a4d9bce2cfaa1e40046949bbaf3fe5be8379a4" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:45:01 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "06c7608e96f63b6ec5b3b28bd698b48367318a5ec1f566c6670abd1394df587d" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:45:59 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:12 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-5460/ss2-2 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:16 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4f546b40c1063fd3b69b65ec67fe0bd54872a4891c8649ebf46e2b4faddbaa22" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:17 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:20 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:20 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-5460/ss2-0 to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:20 +0000 UTC - event for test: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint statefulset-5460/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:21 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b8e46e4438a9ed5918111925eb948a7090a6de743af3052a0f8c2391d3d04d1d" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:23 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:24 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "db3b79a169ad5619a32dee8f48660e28097116236cd9a9b45fd857c4511cdbad" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:24 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fbff9d01d6c1ec8e5dc2a6ef6b9ec297ba8d9c321f1adeb8a31c6866920e9c8a" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:26 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "69708343fb06aa53d644928db35d1e1762743c360ed4f5eb1dde321027cb6dbf" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:27 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "02ec8db55a7b77932ca18f738d8de162933704f713ebbae4458af5849481f1ad" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:28 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "512924a72a4d7fecb88265362ded39c004e3b35f121424317d2c4e902dfa708e" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:29 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f8dfc358509584b5ed72acd4148ab325e52cf87fbd20e39db1e54cbec86ca091" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.453: INFO: At 2023-01-30 22:46:30 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "63c8ef6de05ce41db323ec46c6726bc316e6bfb5bdae1411d189d2c765be9403" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.454: INFO: At 2023-01-30 22:46:31 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ba9e0d647df9de0df8c2e48a6755b220a6b7b0348724611bd1e3cc1a4d0ee761" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.454: INFO: At 2023-01-30 22:46:32 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "694dfd440a45c3c9a61d801be4bf8aaa0e05b85f598b705cc29aa24fc0476751" network for pod "ss2-0": networkPlugin cni failed to set up pod "ss2-0_statefulset-5460" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:21 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-5460/ss2-2 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:22 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:33 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" in 10.547044264s (10.547048912s including waiting) Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:33 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Started: Started container webserver Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:33 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Created: Created container webserver Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:34 +0000 UTC - event for ss2-2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:46 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:46 +0000 UTC - event for ss2-1: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://172.20.46.4:80/index.html": dial tcp 172.20.46.4:80: connect: connection refused Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:46 +0000 UTC - event for ss2-1: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:47 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful Jan 30 22:56:55.454: INFO: At 2023-01-30 22:56:47 +0000 UTC - event for ss2-0: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:56:55.595: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:56:55.595: INFO: Jan 30 22:56:55.739: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:56:55.881: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 11572 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2903":"ip-172-20-37-244.sa-east-1.compute.internal","ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:56:16 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:56:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:21 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:21 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:21 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:56:21 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-2903^4d0dc3eb-a0f1-11ed-a88d-2601e81cd4ec],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-2903^4d0dc3eb-a0f1-11ed-a88d-2601e81cd4ec,DevicePath:,},},Config:nil,},} Jan 30 22:56:55.882: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:56:56.036: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:56:56.325: INFO: dns-test-049f9690-c604-49d4-8d6d-e5987cdf31bb started at 2023-01-30 22:56:46 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:56.325: INFO: Container jessie-querier ready: false, restart count 0 Jan 30 22:56:56.325: INFO: Container querier ready: false, restart count 0 Jan 30 22:56:56.325: INFO: Container webserver ready: false, restart count 0 Jan 30 22:56:56.325: INFO: pod-e2e7e794-b570-4dac-a091-5118d49bc468 started at 2023-01-30 22:56:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container write-pod ready: true, restart count 0 Jan 30 22:56:56.325: INFO: netserver-0 started at 2023-01-30 22:56:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container webserver ready: false, restart count 0 Jan 30 22:56:56.325: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:56.325: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:56.325: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-kfvnw started at 2023-01-30 22:56:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:56:56.325: INFO: ss-2 started at 2023-01-30 22:56:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container webserver ready: false, restart count 0 Jan 30 22:56:56.325: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:56:56.325: INFO: inline-volume-tester-cwp5l started at 2023-01-30 22:56:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:56:56.325: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:54:32 +0000 UTC (0+7 container statuses recorded) Jan 30 22:56:56.325: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:56.325: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:56.325: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-m7c8x started at 2023-01-30 22:56:51 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:56:56.325: INFO: frontend-5c4f744f96-tkxjg started at 2023-01-30 22:56:51 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 30 22:56:56.325: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:56.325: INFO: Container coredns ready: true, restart count 0 Jan 30 22:56:56.787: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:56:56.787: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:56:56.930: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 13051 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-46-143.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2451":"ip-172-20-46-143.sa-east-1.compute.internal","csi-hostpath-provisioning-4781":"ip-172-20-46-143.sa-east-1.compute.internal","ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:56:10 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:56:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-2451^5d5f451a-a0f1-11ed-b62b-56cc64d2e821 kubernetes.io/csi/csi-hostpath-provisioning-4781^5f03ac5a-a0f1-11ed-8672-6e64e33b2a94 kubernetes.io/csi/ebs.csi.aws.com^vol-00d01ca9fd769afde],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-2451^5d5f451a-a0f1-11ed-b62b-56cc64d2e821,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-4781^5f03ac5a-a0f1-11ed-8672-6e64e33b2a94,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00d01ca9fd769afde,DevicePath:,},},Config:nil,},} Jan 30 22:56:56.930: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:56:57.075: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:56:57.224: INFO: agnhost-replica-6f5fb76474-thd7t started at 2023-01-30 22:56:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container replica ready: true, restart count 0 Jan 30 22:56:57.224: INFO: adopt-release-qqtpc started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container c ready: true, restart count 0 Jan 30 22:56:57.224: INFO: adopt-release-rpjrs started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container c ready: true, restart count 0 Jan 30 22:56:57.224: INFO: adopt-release-rzxqc started at 2023-01-30 22:56:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container c ready: false, restart count 0 Jan 30 22:56:57.224: INFO: frontend-5c4f744f96-2l5rg started at 2023-01-30 22:56:51 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 30 22:56:57.224: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:56:31 +0000 UTC (0+7 container statuses recorded) Jan 30 22:56:57.224: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:56:57.224: INFO: pod-subpath-test-dynamicpv-p54g started at 2023-01-30 22:56:44 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Init container init-volume-dynamicpv-p54g ready: true, restart count 0 Jan 30 22:56:57.224: INFO: Container test-container-subpath-dynamicpv-p54g ready: false, restart count 0 Jan 30 22:56:57.224: INFO: pod-subpath-test-dynamicpv-qsqw started at 2023-01-30 22:56:47 +0000 UTC (2+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Init container init-volume-dynamicpv-qsqw ready: true, restart count 0 Jan 30 22:56:57.224: INFO: Init container test-init-subpath-dynamicpv-qsqw ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container test-container-subpath-dynamicpv-qsqw ready: false, restart count 0 Jan 30 22:56:57.224: INFO: csi-mockplugin-0 started at <nil> (0+0 container statuses recorded) Jan 30 22:56:57.224: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:56:57.224: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:56:57.224: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:57.224: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:56:57.224: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:57.224: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:57.224: INFO: ss-0 started at 2023-01-30 22:56:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container webserver ready: true, restart count 0 Jan 30 22:56:57.224: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:54:22 +0000 UTC (0+7 container statuses recorded) Jan 30 22:56:57.224: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:56:57.224: INFO: csi-mockplugin-0 started at 2023-01-30 22:56:37 +0000 UTC (0+4 container statuses recorded) Jan 30 22:56:57.224: INFO: Container busybox ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container driver-registrar ready: false, restart count 0 Jan 30 22:56:57.224: INFO: Container mock ready: false, restart count 0 Jan 30 22:56:57.224: INFO: netserver-1 started at 2023-01-30 22:56:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container webserver ready: false, restart count 0 Jan 30 22:56:57.224: INFO: oidc-discovery-validator started at 2023-01-30 22:56:50 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:57.224: INFO: Container oidc-discovery-validator ready: false, restart count 0 Jan 30 22:56:57.696: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:56:57.696: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:56:57.838: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 12488 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-56-33.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2870":"ip-172-20-56-33.sa-east-1.compute.internal","csi-hostpath-ephemeral-3305":"ip-172-20-56-33.sa-east-1.compute.internal","ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:55:12 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:55:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:38 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:38 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:38 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:56:38 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-2870^4c18e9cf-a0f1-11ed-92b5-ee38e4391dc5 kubernetes.io/csi/csi-hostpath-ephemeral-3305^26db7f9b-a0f1-11ed-826d-5eda5ff6d3ee kubernetes.io/csi/csi-hostpath-ephemeral-3305^546b5754-a0f1-11ed-826d-5eda5ff6d3ee],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-3305^26db7f9b-a0f1-11ed-826d-5eda5ff6d3ee,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-2870^4c18e9cf-a0f1-11ed-92b5-ee38e4391dc5,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-3305^546b5754-a0f1-11ed-826d-5eda5ff6d3ee,DevicePath:,},},Config:nil,},} Jan 30 22:56:57.838: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:56:57.983: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:56:58.130: INFO: netserver-2 started at 2023-01-30 22:56:38 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.130: INFO: Container webserver ready: false, restart count 0 Jan 30 22:56:58.130: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:49:23 +0000 UTC (0+7 container statuses recorded) Jan 30 22:56:58.130: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:58.130: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:56:58.130: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:58.130: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:58.130: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:56:58.130: INFO: agnhost-replica-6f5fb76474-xrz5f started at 2023-01-30 22:56:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.130: INFO: Container replica ready: true, restart count 0 Jan 30 22:56:58.130: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:58.130: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:56:58.130: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:58.131: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:58.131: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.131: INFO: Container coredns ready: true, restart count 0 Jan 30 22:56:58.131: INFO: inline-volume-tester2-d7wtb started at 2023-01-30 22:56:29 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.131: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:56:58.131: INFO: inline-volume-tester-thvcx started at 2023-01-30 22:55:11 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.131: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:56:58.131: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.131: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:56:58.131: INFO: inline-volume-tester-v5kjh started at 2023-01-30 22:56:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:58.131: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:56:58.612: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:56:58.612: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:56:58.757: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 10331 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:56:58.758: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:56:58.903: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:56:59.053: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:56:59.053: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:56:59.053: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:56:59.053: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:56:59.053: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:56:59.053: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:59.053: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:56:59.053: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:56:59.053: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:56:59.053: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:56:59.053: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:59.053: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:59.053: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:56:59.053: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:56:59.053: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.053: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:56:59.510: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:56:59.510: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:56:59.652: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 13235 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","csi-mock-csi-mock-volumes-3826":"csi-mock-csi-mock-volumes-3826","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:56:53 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0bb8ad584573965a4 kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3 kubernetes.io/csi/ebs.csi.aws.com^vol-0fb5e115bfe4f75ac],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0bb8ad584573965a4,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0fb5e115bfe4f75ac,DevicePath:,},},Config:nil,},} Jan 30 22:56:59.652: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:56:59.803: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:56:59.955: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:54:59 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:56:59.955: INFO: pod-9361d956-3a9e-45fa-92dc-ac8884faccaa started at 2023-01-30 22:50:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container write-pod ready: true, restart count 0 Jan 30 22:56:59.955: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:59.955: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:59.955: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:56:59.955: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:56:59.955: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:56:59.955: INFO: inline-volume-tester-txdqz started at 2023-01-30 22:56:30 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 30 22:56:59.955: INFO: pvc-volume-tester-7qg6f started at <nil> (0+0 container statuses recorded) Jan 30 22:56:59.955: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-kvcdl started at 2023-01-30 22:56:30 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:56:59.955: INFO: rs-n22dj started at 2023-01-30 22:56:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container donothing ready: false, restart count 0 Jan 30 22:56:59.955: INFO: netserver-3 started at 2023-01-30 22:56:38 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container webserver ready: false, restart count 0 Jan 30 22:56:59.955: INFO: agnhost-primary-69cb998d54-527k9 started at 2023-01-30 22:56:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.955: INFO: Container primary ready: true, restart count 0 Jan 30 22:56:59.956: INFO: frontend-5c4f744f96-nlch8 started at 2023-01-30 22:56:51 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.956: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 30 22:56:59.956: INFO: csi-mockplugin-resizer-0 started at 2023-01-30 22:54:59 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.956: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:56:59.956: INFO: pod-subpath-test-inlinevolume-kdcq started at 2023-01-30 22:56:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:59.956: INFO: Init container init-volume-inlinevolume-kdcq ready: true, restart count 0 Jan 30 22:56:59.956: INFO: Container test-container-subpath-inlinevolume-kdcq ready: false, restart count 0 Jan 30 22:56:59.956: INFO: csi-mockplugin-0 started at 2023-01-30 22:54:59 +0000 UTC (0+3 container statuses recorded) Jan 30 22:56:59.956: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:56:59.956: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:56:59.956: INFO: Container mock ready: true, restart count 0 Jan 30 22:56:59.956: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:56:59.956: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:56:59.956: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:56:59.956: INFO: ss-1 started at 2023-01-30 22:56:24 +0000 UTC (0+1 container statuses recorded) Jan 30 22:56:59.956: INFO: Container webserver ready: true, restart count 0 Jan 30 22:57:00.426: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:57:00.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-5460" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 30 22:54:25.998: Failed waiting for state update: timed out waiting for the condition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/wait.go:124from junit_15.xml
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:42:47.634: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename statefulset W0130 22:42:48.206326 6756 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 30 22:42:48.206: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-9857 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a new StatefulSet Jan 30 22:42:49.274: INFO: Found 1 stateful pods, waiting for 3 Jan 30 22:42:59.416: INFO: Found 1 stateful pods, waiting for 3 Jan 30 22:43:09.418: INFO: Found 1 stateful pods, waiting for 3 Jan 30 22:43:19.417: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:43:29.417: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:43:39.418: INFO: Found 2 stateful pods, waiting for 3 Jan 30 22:43:49.419: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:49.419: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:49.419: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 22:43:59.418: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:59.418: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:59.418: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 30 22:43:59.844: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 22:44:01.438: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 30 22:44:01.438: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 22:44:01.438: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' �[1mSTEP�[0m: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Jan 30 22:44:12.305: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Updating Pods in reverse ordinal order Jan 30 22:44:12.732: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 22:44:14.572: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 30 22:44:14.572: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 22:44:14.572: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 22:44:25.427: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:44:25.428: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:25.428: INFO: Waiting for Pod statefulset-9857/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:35.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:44:35.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:35.716: INFO: Waiting for Pod statefulset-9857/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:45.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:44:45.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:45.713: INFO: Waiting for Pod statefulset-9857/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:55.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:44:55.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:44:55.715: INFO: Waiting for Pod statefulset-9857/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:05.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:45:05.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:05.716: INFO: Waiting for Pod statefulset-9857/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:15.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:45:15.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:15.712: INFO: Waiting for Pod statefulset-9857/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:25.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:45:25.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:35.714: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:45:35.714: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:45.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:45:45.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:45:55.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:45:55.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:05.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:46:05.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:15.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:46:15.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:25.714: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:46:25.714: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:35.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:46:35.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:45.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:46:45.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:46:55.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:46:55.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:47:05.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:47:05.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:47:15.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:47:15.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:47:25.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:47:25.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:47:35.714: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:47:35.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:47:45.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:47:45.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:47:55.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:47:55.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:48:05.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:48:05.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:48:15.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:48:15.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:48:25.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:48:25.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:48:35.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:48:35.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:48:45.714: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:48:45.714: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:48:55.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:48:55.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:49:05.714: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:49:05.714: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:49:15.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:49:15.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:49:25.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:49:25.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:49:35.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:49:35.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:49:45.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:49:45.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:49:55.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:49:55.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:50:05.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:50:05.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:50:15.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:50:15.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:50:25.720: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:50:25.720: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:50:35.714: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:50:35.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:50:45.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:50:45.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:50:55.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:50:55.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:51:05.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:51:05.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:51:15.721: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:51:15.721: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:51:25.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:51:25.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:51:35.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:51:35.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:51:45.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:51:45.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:51:55.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:51:55.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:52:05.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:52:05.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:52:15.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:52:15.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:52:25.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:52:25.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:52:35.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:52:35.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:52:45.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:52:45.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:52:55.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:52:55.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:53:05.716: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:53:05.716: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:53:15.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:53:15.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:53:25.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:53:25.714: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:53:35.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:53:35.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:53:45.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:53:45.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:53:55.713: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:53:55.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:54:05.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:54:05.712: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:54:15.715: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:54:15.715: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:54:25.712: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:54:25.713: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:54:25.997: INFO: Waiting for StatefulSet statefulset-9857/ss2 to complete update Jan 30 22:54:25.997: INFO: Waiting for Pod statefulset-9857/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb Jan 30 22:54:25.998: FAIL: Failed waiting for state update: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/apps.waitForRollingUpdate({0x7938928, 0xc003808780}, 0xc003399900) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/wait.go:124 +0x1cc k8s.io/kubernetes/test/e2e/apps.rollbackTest({0x7938928, 0xc003808780}, {0xc004ba7950, 0x10}, 0xc003398500) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1603 +0xabd k8s.io/kubernetes/test/e2e/apps.glob..func9.2.7() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:305 +0xe6 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x0?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000722820, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 30 22:54:26.141: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 describe po ss2-0' Jan 30 22:54:27.771: INFO: stderr: "" Jan 30 22:54:27.771: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-9857\nPriority: 0\nNode: ip-172-20-56-33.sa-east-1.compute.internal/172.20.56.33\nStart Time: Mon, 30 Jan 2023 22:42:49 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-57bbdd95cb\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: <none>\nStatus: Running\nIP: 172.20.47.221\nIPs:\n IP: 172.20.47.221\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: docker://75c11139cd68b52c8ceeee8bcce453e3c758f107f7c61901e119a4aa57e535ed\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Mon, 30 Jan 2023 22:43:15 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9tzkp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-9tzkp:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 11m default-scheduler Successfully assigned statefulset-9857/ss2-0 to ip-172-20-56-33.sa-east-1.compute.internal\n Normal Pulling 11m kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\"\n Normal Pulled 11m kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\" in 8.665137788s (23.804626676s including waiting)\n Normal Created 11m kubelet Created container webserver\n Normal Started 11m kubelet Started container webserver\n" Jan 30 22:54:27.771: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-9857 Priority: 0 Node: ip-172-20-56-33.sa-east-1.compute.internal/172.20.56.33 Start Time: Mon, 30 Jan 2023 22:42:49 +0000 Labels: baz=blah controller-revision-hash=ss2-57bbdd95cb foo=bar statefulset.kubernetes.io/pod-name=ss2-0 Annotations: <none> Status: Running IP: 172.20.47.221 IPs: IP: 172.20.47.221 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: docker://75c11139cd68b52c8ceeee8bcce453e3c758f107f7c61901e119a4aa57e535ed Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Running Started: Mon, 30 Jan 2023 22:43:15 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9tzkp (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-9tzkp: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 11m default-scheduler Successfully assigned statefulset-9857/ss2-0 to ip-172-20-56-33.sa-east-1.compute.internal Normal Pulling 11m kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" Normal Pulled 11m kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" in 8.665137788s (23.804626676s including waiting) Normal Created 11m kubelet Created container webserver Normal Started 11m kubelet Started container webserver Jan 30 22:54:27.771: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 logs ss2-0 --tail=100' Jan 30 22:54:28.436: INFO: stderr: "" Jan 30 22:54:28.436: INFO: stdout: "172.20.56.33 - - [30/Jan/2023:22:52:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:52:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:53:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.56.33 - - [30/Jan/2023:22:54:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Jan 30 22:54:28.437: INFO: Last 100 log lines of ss2-0: 172.20.56.33 - - [30/Jan/2023:22:52:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:52:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:53:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.56.33 - - [30/Jan/2023:22:54:28 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 30 22:54:28.437: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 describe po ss2-1' Jan 30 22:54:29.233: INFO: stderr: "" Jan 30 22:54:29.233: INFO: stdout: "Name: ss2-1\nNamespace: statefulset-9857\nPriority: 0\nNode: ip-172-20-63-7.sa-east-1.compute.internal/172.20.63.7\nStart Time: Mon, 30 Jan 2023 22:45:23 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-5f8764d585\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations: <none>\nStatus: Pending\nIP: \nIPs: <none>\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: \n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\n Image ID: \n Port: <none>\n Host Port: <none>\n State: Waiting\n Reason: ContainerCreating\n Ready: False\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvvcv (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-bvvcv:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9m6s default-scheduler Successfully assigned statefulset-9857/ss2-1 to ip-172-20-63-7.sa-east-1.compute.internal\n Warning FailedCreatePodSandBox 9m3s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"0ea7dfed61ef9431b761794ecf8be5cd0d23de6ede1afa6fa4a4471c42ca96f3\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m58s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"49325abc3b6858ab381ada62278b1d37739f462da14a16d5c0a58460260f8e69\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m54s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"6e0aa29ac64bb88b83f0a15e4667cce5ad4acc7e1d28b36f1cd90a3542c11043\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m47s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"78f6564e4d4cfe2e789940f40fe5a554b96f96d881d0119a8f460102eb955a40\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m41s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"f2f5d079e26214bec545e7a41f389ca6357c55ce6cf12a55cd404eb2c0d46a38\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m37s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"3a96ce780780711980403982cac32ca7d2ad3948204db43fc2424174ee866ece\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m33s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"cf20f35974be4121798a2721a2b8a307c769184d4b561a640a4bd3e7b372b0bb\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m28s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"ddae6478c7cd0d897a3268e09c589b6632e0f73c18f8d96a18436edaa1c4e830\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 8m23s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"32654514be3c3604089d9ab7cf3da0c1d050442b1f09ade0dd97b52912189752\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Normal SandboxChanged 8m7s (x12 over 9m2s) kubelet Pod sandbox changed, it will be killed and re-created.\n Warning FailedCreatePodSandBox 3m58s (x53 over 8m17s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"772c26e4e7840df628779a752dbe79f18a67cb2de6f180dc617ba9fa5ac90a6c\" network for pod \"ss2-1\": networkPlugin cni failed to set up pod \"ss2-1_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n" Jan 30 22:54:29.233: INFO: Output of kubectl describe ss2-1: Name: ss2-1 Namespace: statefulset-9857 Priority: 0 Node: ip-172-20-63-7.sa-east-1.compute.internal/172.20.63.7 Start Time: Mon, 30 Jan 2023 22:45:23 +0000 Labels: baz=blah controller-revision-hash=ss2-5f8764d585 foo=bar statefulset.kubernetes.io/pod-name=ss2-1 Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvvcv (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-bvvcv: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m6s default-scheduler Successfully assigned statefulset-9857/ss2-1 to ip-172-20-63-7.sa-east-1.compute.internal Warning FailedCreatePodSandBox 9m3s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0ea7dfed61ef9431b761794ecf8be5cd0d23de6ede1afa6fa4a4471c42ca96f3" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m58s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "49325abc3b6858ab381ada62278b1d37739f462da14a16d5c0a58460260f8e69" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m54s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6e0aa29ac64bb88b83f0a15e4667cce5ad4acc7e1d28b36f1cd90a3542c11043" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m47s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "78f6564e4d4cfe2e789940f40fe5a554b96f96d881d0119a8f460102eb955a40" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m41s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f2f5d079e26214bec545e7a41f389ca6357c55ce6cf12a55cd404eb2c0d46a38" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m37s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3a96ce780780711980403982cac32ca7d2ad3948204db43fc2424174ee866ece" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m33s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cf20f35974be4121798a2721a2b8a307c769184d4b561a640a4bd3e7b372b0bb" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m28s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ddae6478c7cd0d897a3268e09c589b6632e0f73c18f8d96a18436edaa1c4e830" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 8m23s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "32654514be3c3604089d9ab7cf3da0c1d050442b1f09ade0dd97b52912189752" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Normal SandboxChanged 8m7s (x12 over 9m2s) kubelet Pod sandbox changed, it will be killed and re-created. Warning FailedCreatePodSandBox 3m58s (x53 over 8m17s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "772c26e4e7840df628779a752dbe79f18a67cb2de6f180dc617ba9fa5ac90a6c" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:29.233: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 logs ss2-1 --tail=100' Jan 30 22:54:29.901: INFO: rc: 1 Jan 30 22:54:29.901: INFO: Last 100 log lines of ss2-1: Jan 30 22:54:29.901: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 describe po ss2-2' Jan 30 22:54:30.705: INFO: stderr: "" Jan 30 22:54:30.705: INFO: stdout: "Name: ss2-2\nNamespace: statefulset-9857\nPriority: 0\nNode: ip-172-20-46-143.sa-east-1.compute.internal/172.20.46.143\nStart Time: Mon, 30 Jan 2023 22:44:16 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-5f8764d585\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-2\nAnnotations: <none>\nStatus: Running\nIP: 172.20.35.178\nIPs:\n IP: 172.20.35.178\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: docker://951eefb0af96dc7814f6368df187f608dcf0e35e6d819f26118aa10cde47bbd5\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Mon, 30 Jan 2023 22:45:13 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lss6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-4lss6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10m default-scheduler Successfully assigned statefulset-9857/ss2-2 to ip-172-20-46-143.sa-east-1.compute.internal\n Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"8291a652619cebddbdc089ecc49cb6e64783aa7ab93a7d1b206e697587fe76f5\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"c6b0ff021cd2b5e7271c758916e5081ac897202ede4bca75e860e8fc756e5456\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"f9de173792d402985820092fc54c37b6ea724792d6fb69cd71b7fcf83cdf2593\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"23e3ea6d8be97cc116265e70d41721e0848d0e27c875a7b3e9cfaded2b05c3f7\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m59s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"0a1a4df4012fcc8d3f0b7716c3338a4bb7245100c950bdf37779ad3724b4a313\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m57s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"a297c69e3bfebf7127e0623a1c71903e447af12f5a14b75d2a501e086bbf4e95\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m53s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"dd9bed58ff77f57301f7d6c6824101c71deb718bb688edc2b595026aaab70925\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m49s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"5011860bbf840ec6765430104142be6529b2b34970fae3c86d8694ccd2721cac\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Warning FailedCreatePodSandBox 9m46s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"ae9bc458abb7e0759a41caacb695bfc9d0acb922945a5e014d46fc7038e18cf7\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n Normal SandboxChanged 9m37s (x12 over 10m) kubelet Pod sandbox changed, it will be killed and re-created.\n Warning FailedCreatePodSandBox 9m35s (x4 over 9m44s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container \"c3c7d919ed5b75bc71fb15eafac5b0c2861ea72f6940d2ce0f0a68a021df4674\" network for pod \"ss2-2\": networkPlugin cni failed to set up pod \"ss2-2_statefulset-9857\" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available\n" Jan 30 22:54:30.706: INFO: Output of kubectl describe ss2-2: Name: ss2-2 Namespace: statefulset-9857 Priority: 0 Node: ip-172-20-46-143.sa-east-1.compute.internal/172.20.46.143 Start Time: Mon, 30 Jan 2023 22:44:16 +0000 Labels: baz=blah controller-revision-hash=ss2-5f8764d585 foo=bar statefulset.kubernetes.io/pod-name=ss2-2 Annotations: <none> Status: Running IP: 172.20.35.178 IPs: IP: 172.20.35.178 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: docker://951eefb0af96dc7814f6368df187f608dcf0e35e6d819f26118aa10cde47bbd5 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 Port: <none> Host Port: <none> State: Running Started: Mon, 30 Jan 2023 22:45:13 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lss6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-4lss6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 10m default-scheduler Successfully assigned statefulset-9857/ss2-2 to ip-172-20-46-143.sa-east-1.compute.internal Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8291a652619cebddbdc089ecc49cb6e64783aa7ab93a7d1b206e697587fe76f5" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c6b0ff021cd2b5e7271c758916e5081ac897202ede4bca75e860e8fc756e5456" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f9de173792d402985820092fc54c37b6ea724792d6fb69cd71b7fcf83cdf2593" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 10m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "23e3ea6d8be97cc116265e70d41721e0848d0e27c875a7b3e9cfaded2b05c3f7" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m59s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0a1a4df4012fcc8d3f0b7716c3338a4bb7245100c950bdf37779ad3724b4a313" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m57s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a297c69e3bfebf7127e0623a1c71903e447af12f5a14b75d2a501e086bbf4e95" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m53s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dd9bed58ff77f57301f7d6c6824101c71deb718bb688edc2b595026aaab70925" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m49s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5011860bbf840ec6765430104142be6529b2b34970fae3c86d8694ccd2721cac" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Warning FailedCreatePodSandBox 9m46s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ae9bc458abb7e0759a41caacb695bfc9d0acb922945a5e014d46fc7038e18cf7" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Normal SandboxChanged 9m37s (x12 over 10m) kubelet Pod sandbox changed, it will be killed and re-created. Warning FailedCreatePodSandBox 9m35s (x4 over 9m44s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c3c7d919ed5b75bc71fb15eafac5b0c2861ea72f6940d2ce0f0a68a021df4674" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:30.706: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/f32618b1-a0ed-11ed-9994-daecb65d57ab/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-u2204-k23-ko26-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-9857 logs ss2-2 --tail=100' Jan 30 22:54:31.366: INFO: stderr: "" Jan 30 22:54:31.366: INFO: stdout: "172.20.46.143 - - [30/Jan/2023:22:52:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:52:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:53:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.46.143 - - [30/Jan/2023:22:54:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Jan 30 22:54:31.366: INFO: Last 100 log lines of ss2-2: 172.20.46.143 - - [30/Jan/2023:22:52:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:52:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:53:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.46.143 - - [30/Jan/2023:22:54:30 +0000] "GET /index.html HTTP/1.1" 200 45 Jan 30 22:54:31.366: INFO: Deleting all statefulset in ns statefulset-9857 Jan 30 22:54:31.508: INFO: Scaling statefulset ss2 to 0 Jan 30 22:54:42.081: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 22:54:42.223: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "statefulset-9857". �[1mSTEP�[0m: Found 52 events. Jan 30 22:54:42.934: INFO: At 2023-01-30 22:42:49 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful Jan 30 22:54:42.934: INFO: At 2023-01-30 22:42:49 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-9857/ss2-0 to ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:42.934: INFO: At 2023-01-30 22:42:51 +0000 UTC - event for ss2-0: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:15 +0000 UTC - event for ss2-0: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" in 8.665137788s (23.804626676s including waiting) Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:15 +0000 UTC - event for ss2-0: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} Created: Created container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:15 +0000 UTC - event for ss2-0: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} Started: Started container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:17 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:17 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-9857/ss2-1 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:18 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:39 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" in 8.20966655s (20.543034956s including waiting) Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:39 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Created: Created container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:39 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Started: Started container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:40 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:40 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-9857/ss2-2 to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:43 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Started: Started container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:43 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Created: Created container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:43:43 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:02 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404 Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:15 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:15 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:16 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-9857/ss2-2 to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:18 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8291a652619cebddbdc089ecc49cb6e64783aa7ab93a7d1b206e697587fe76f5" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:19 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:21 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c6b0ff021cd2b5e7271c758916e5081ac897202ede4bca75e860e8fc756e5456" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:24 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f9de173792d402985820092fc54c37b6ea724792d6fb69cd71b7fcf83cdf2593" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:28 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "23e3ea6d8be97cc116265e70d41721e0848d0e27c875a7b3e9cfaded2b05c3f7" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:31 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0a1a4df4012fcc8d3f0b7716c3338a4bb7245100c950bdf37779ad3724b4a313" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:33 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a297c69e3bfebf7127e0623a1c71903e447af12f5a14b75d2a501e086bbf4e95" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:37 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dd9bed58ff77f57301f7d6c6824101c71deb718bb688edc2b595026aaab70925" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:41 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5011860bbf840ec6765430104142be6529b2b34970fae3c86d8694ccd2721cac" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:44 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ae9bc458abb7e0759a41caacb695bfc9d0acb922945a5e014d46fc7038e18cf7" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:44:46 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c3c7d919ed5b75bc71fb15eafac5b0c2861ea72f6940d2ce0f0a68a021df4674" network for pod "ss2-2": networkPlugin cni failed to set up pod "ss2-2_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:14 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:14 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:15 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://172.20.36.20:80/index.html": dial tcp 172.20.36.20:80: connect: connection refused Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:17 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://172.20.36.20:80/index.html": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:23 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-9857/ss2-1 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:26 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0ea7dfed61ef9431b761794ecf8be5cd0d23de6ede1afa6fa4a4471c42ca96f3" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:27 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:31 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "49325abc3b6858ab381ada62278b1d37739f462da14a16d5c0a58460260f8e69" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:35 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6e0aa29ac64bb88b83f0a15e4667cce5ad4acc7e1d28b36f1cd90a3542c11043" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:42 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "78f6564e4d4cfe2e789940f40fe5a554b96f96d881d0119a8f460102eb955a40" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:48 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f2f5d079e26214bec545e7a41f389ca6357c55ce6cf12a55cd404eb2c0d46a38" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:52 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3a96ce780780711980403982cac32ca7d2ad3948204db43fc2424174ee866ece" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:45:56 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cf20f35974be4121798a2721a2b8a307c769184d4b561a640a4bd3e7b372b0bb" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:46:01 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ddae6478c7cd0d897a3268e09c589b6632e0f73c18f8d96a18436edaa1c4e830" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:46:06 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "32654514be3c3604089d9ab7cf3da0c1d050442b1f09ade0dd97b52912189752" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:46:12 +0000 UTC - event for ss2-1: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "772c26e4e7840df628779a752dbe79f18a67cb2de6f180dc617ba9fa5ac90a6c" network for pod "ss2-1": networkPlugin cni failed to set up pod "ss2-1_statefulset-9857" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:42.934: INFO: At 2023-01-30 22:54:31 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:54:42.934: INFO: At 2023-01-30 22:54:31 +0000 UTC - event for ss2-2: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://172.20.35.178:80/index.html": dial tcp 172.20.35.178:80: connect: connection refused Jan 30 22:54:42.934: INFO: At 2023-01-30 22:54:38 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful Jan 30 22:54:42.934: INFO: At 2023-01-30 22:54:38 +0000 UTC - event for ss2-0: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} Killing: Stopping container webserver Jan 30 22:54:43.076: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:54:43.076: INFO: Jan 30 22:54:43.219: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:43.361: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 10033 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:43.362: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:43.506: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:43.799: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-mvnww started at 2023-01-30 22:54:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:43.799: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:43.799: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:43.799: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:43.799: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:43.799: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: rs-hh4qw started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:43.799: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-5pprs started at 2023-01-30 22:50:34 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:54:32 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:43.799: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: pod-subpath-test-preprovisionedpv-zc5t started at 2023-01-30 22:50:47 +0000 UTC (1+2 container statuses recorded) Jan 30 22:54:43.799: INFO: Init container test-init-subpath-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container test-container-subpath-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container test-container-volume-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: pod-subpath-test-preprovisionedpv-9kt4 started at 2023-01-30 22:54:31 +0000 UTC (2+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Init container init-volume-preprovisionedpv-9kt4 ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Init container test-init-volume-preprovisionedpv-9kt4 ready: false, restart count 0 Jan 30 22:54:43.799: INFO: Container test-container-subpath-preprovisionedpv-9kt4 ready: false, restart count 0 Jan 30 22:54:43.799: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:43.799: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:43.799: INFO: Container coredns ready: true, restart count 0 Jan 30 22:54:44.464: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:44.464: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:44.607: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 9940 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:44.608: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:44.753: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2g6gh started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.903: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-g66zc started at 2023-01-30 22:49:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-8w4h2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-4cqsj started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.903: INFO: adopt-release-qqtpc started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container c ready: true, restart count 0 Jan 30 22:54:44.903: INFO: adopt-release-rpjrs started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container c ready: false, restart count 0 Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-dw5bz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jh598 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.903: INFO: ss2-0 started at 2023-01-30 22:46:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container webserver ready: false, restart count 0 Jan 30 22:54:44.903: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-s8w47 started at 2023-01-30 22:48:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sw7v started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.903: INFO: rs-8d5pg started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container donothing ready: true, restart count 0 Jan 30 22:54:44.903: INFO: pod-secrets-025721a8-1f1a-425c-b117-8841c9b333cd started at 2023-01-30 22:49:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container secret-volume-test ready: false, restart count 0 Jan 30 22:54:44.903: INFO: pod-subpath-test-preprovisionedpv-z9fs started at 2023-01-30 22:50:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.903: INFO: Container test-container-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:54:44.903: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-q6pfk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.904: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:44.904: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:44.904: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-bxpzz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.904: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:54:22 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:44.904: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:44.904: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:44.904: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:44.904: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:44.904: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:44.904: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:44.904: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:44.904: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:44.904: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:44.904: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:44.904: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:44.904: INFO: busybox-host-aliases4c49ce25-2bdd-4be9-8511-41e5a85d0929 started at 2023-01-30 22:50:17 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Container busybox-host-aliases4c49ce25-2bdd-4be9-8511-41e5a85d0929 ready: false, restart count 0 Jan 30 22:54:44.904: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-pt29g started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.904: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sx9d started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:44.904: INFO: pod-4f1d4caa-3b55-4d16-b486-82f59f49f567 started at 2023-01-30 22:50:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:44.904: INFO: Container test-container ready: false, restart count 0 Jan 30 22:54:45.763: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:45.763: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:45.905: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 10115 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:25 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:45.906: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:46.050: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-z46zz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:54:46.201: INFO: fail-once-non-local-ksmfx started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container c ready: false, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-k9mvg started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rj6w6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-hx4t7 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:49:23 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:46.201: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:46.201: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:46.201: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:46.201: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-kl9wl started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-dgjvl started at 2023-01-30 22:51:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:46.201: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:46.201: INFO: fail-once-non-local-7gtm7 started at 2023-01-30 22:44:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container c ready: false, restart count 0 Jan 30 22:54:46.201: INFO: local-injector started at 2023-01-30 22:50:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container local-injector ready: false, restart count 0 Jan 30 22:54:46.201: INFO: fail-once-non-local-nvn9l started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container c ready: false, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cxdvn started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:46.201: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:46.201: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:46.201: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:46.201: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container coredns ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-ctp24 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: exec-volume-test-preprovisionedpv-4xz9 started at 2023-01-30 22:51:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container exec-container-preprovisionedpv-4xz9 ready: false, restart count 0 Jan 30 22:54:46.201: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-tnngb started at 2023-01-30 22:49:40 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:46.201: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rqjpx started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fmwp2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:46.201: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g95mq started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:46.201: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:47.068: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:47.068: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:47.210: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 10331 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:43 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:47.210: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:47.355: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:47.504: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:54:47.504: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:47.504: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:47.504: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:47.504: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:54:47.504: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:54:47.504: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:54:47.504: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:54:47.504: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:54:47.504: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:54:47.504: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:47.504: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:54:47.504: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:54:47.504: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:54:47.504: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:47.504: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:54:47.956: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:47.957: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:48.099: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 7860 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},},Config:nil,},} Jan 30 22:54:48.099: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:48.244: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:48.393: INFO: pod-d8cff309-3d6a-4ce5-9ac9-b57de7155461 started at 2023-01-30 22:45:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container write-pod ready: true, restart count 0 Jan 30 22:54:48.393: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:49:11 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:48.393: INFO: inline-volume-tester-62nrc started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 30 22:54:48.393: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fq2r6 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.393: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-9t9kb started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.393: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:48.393: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:48.393: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-grfw9 started at 2023-01-30 22:45:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:48.393: INFO: httpd started at 2023-01-30 22:52:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.393: INFO: Container httpd ready: false, restart count 0 Jan 30 22:54:48.393: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-nft6k started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-zrntw started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fp6pt started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-qbtpc started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.394: INFO: csi-mockplugin-0 started at 2023-01-30 22:49:11 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:48.394: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:48.394: INFO: Container driver-registrar ready: false, restart count 0 Jan 30 22:54:48.394: INFO: Container mock ready: false, restart count 0 Jan 30 22:54:48.394: INFO: termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2 started at 2023-01-30 22:50:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container termination-message-container ready: false, restart count 0 Jan 30 22:54:48.394: INFO: pod-9361d956-3a9e-45fa-92dc-ac8884faccaa started at 2023-01-30 22:50:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-v8ln8 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cnpbk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:54:48.394: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:48.394: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:48.394: INFO: rs-4k8s4 started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:48.394: INFO: pod-70b62feb-0f03-4bb7-97a6-9bed39f38a55 started at 2023-01-30 22:50:29 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:48.394: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:48.394: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:48.394: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-56dt8 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.394: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2m4f6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:48.394: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:48.881: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:48.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-9857" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\snode\-Service\:\sudp$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:204 Jan 30 22:48:49.379: Unexpected error: <*errors.errorString | 0xc00025e240>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:859from junit_09.xml
[BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:43:47.057: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename nettest �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:204 �[1mSTEP�[0m: Performing setup for networking test in namespace nettest-421 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Jan 30 22:43:48.075: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 30 22:43:49.091: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:43:51.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:43:53.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:43:55.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:43:57.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:43:59.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:01.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:03.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:05.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:07.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:09.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:11.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:13.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:15.239: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:17.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:19.241: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:21.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:23.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:25.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:27.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:29.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:31.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:33.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:35.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:37.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:39.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:41.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:43.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:45.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:47.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:49.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:51.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:53.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:55.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:57.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:44:59.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:01.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:03.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:05.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:07.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:09.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:11.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:13.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:15.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:17.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:19.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:21.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:23.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:25.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:27.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:29.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:31.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:33.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:35.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:37.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:39.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:41.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:43.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:45.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:47.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:49.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:51.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:53.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:55.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:57.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:59.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:01.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:03.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:05.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:07.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:09.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:11.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:13.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:15.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:17.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:19.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:21.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:23.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:25.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:27.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:29.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:31.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:33.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:35.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:37.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:39.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:41.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:43.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:45.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:47.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:49.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:51.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:53.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:55.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:57.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:59.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:01.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:03.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:05.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:07.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:09.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:11.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:13.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:15.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:17.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:19.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:21.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:23.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:25.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:27.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:29.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:31.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:33.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:35.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:37.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:39.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:41.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:43.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:45.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:47.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:49.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:51.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:53.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:55.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:57.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:59.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:01.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:03.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:05.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:07.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:09.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:11.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:13.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:15.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:17.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:19.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:21.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:23.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:25.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:27.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:29.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:31.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:33.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:35.234: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:37.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:39.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:41.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:43.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:45.236: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:47.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:49.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:49.378: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:49.379: FAIL: Unexpected error: <*errors.errorString | 0xc00025e240>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00024e8c0, {0x7055fcb, 0x9}, 0xc003be1b00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:859 +0x1d0 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00024e8c0, 0x203000?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:761 +0x55 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00024e8c0, 0x34?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:776 +0x3b k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc000f50160, {0xc001253178, 0x1, 0x0?}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.6.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:205 +0x51 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000871520, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "nettest-421". �[1mSTEP�[0m: Found 48 events. Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:48 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-421/netserver-0 to ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:48 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-421/netserver-1 to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:48 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-421/netserver-2 to ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:48 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-421/netserver-3 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:50 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5665c4e083876c40b53a5eaa1d8730bcd1ec614860841ec2a308e1d09e0ebae4" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:50 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "36e9031c1ab76cdc3279f01983d47f0560e250f864d2a878aca98928346ccbc3" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:51 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:51 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "758f9a7f8beb8c344297e9610f506845360cd323e52c41556514f0b8a57bd895" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:51 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:52 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "82403693692ca1cdca287f9c88e0e01eef54d0365398363499cbedf89140bb85" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:52 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:52 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "de822fff06d7cf1d43eab9e8e13914af6a77636357682aed6f89c9ecda325735" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:52 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f5024fec4e45d0f33fa5e1233c29d3469d468fe8d78f91942503618b885fa816" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:53 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "06c32d0349555cdfe8dcd3ea8faa8a28c27b4e56b556021831db7cded332e77c" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:53 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:54 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "de21ec22a320d26ea5531a92c516511aa15774fc8a5c877bd0e260bc8c97dd3c" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:55 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "48cdd3e7624fd0595750ec299cb7e1a645d65c9b204e11f2ad049dbedfe50fc2" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:56 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ef1840c42d2eb99104cdc0d0d4e0de6227d2d07f27b4191f50f8914557b6ee40" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:56 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ea6845fac3d447912421c11ac883d6d616cdfcb6a7409daa3b7e46cabe9fe4cc" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:57 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9e9a6a776bd373354d54e7c2702459d8e54c3e6f7363bb2d91168ecc8a7f30d4" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:58 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "20edf817903121926a69eb6384c9f521188a01dbed9929919dbabfdfefd8286b" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:58 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "daf05a6ee1a9dff75f278efbaaae277224939d04885ecd935b63591f1138033f" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:58 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d64bc5d0b48262c76596116db03e025fb4388914d546770fef216dd8a12aaf02" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:43:59 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0f7b6113a704a9f8104ad9da579eab82a0c9de59ee5a207ded8eaa7657c0c4d7" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:00 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c5806f2065a668f3cf7cb5dc6ef8ee9e6d2b333fee6b2bc068637b0dc19193b0" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:01 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5694d75091798abebf2e846aea108bbea8aa8c8bca84c5dfe3e23e98f9cce0f7" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:01 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "727210a7e9dc4c346c09d36abf19b3901a722fd553ae370e2035539534b731c6" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:01 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "aa74ef788424ec3599895d4ae179ced1d6227c6309b3fab3d1d28ee01ea75165" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:03 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "94928178a3fbebb958097f4b0e8dd1fa8471d6c7a313743204fac9af806922c1" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:03 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e93370c17668d65e5855c1414f32fe7320aef8c85e75e12b5157c11906b2a7ab" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:03 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "88ef8bf8f26df76329fc2b5fc6df48bbe4f3ec94d132685b7024a714f645b6b7" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:04 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1748ff65555708505d77f33605647aa3fed9f06595c819748485f023fd75d8e4" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:05 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "009b7c50cd010b8113c0b9ea560c75ab1a479bbce5ccd2ca886e36a635219221" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:05 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2c45382e27256111b74065f160ba6dbe9ddb6380da232d241bc430ef49d7f87f" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:06 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5a9a511c30cf477b0fef9f4debb18c7acc90cf27ae08ebb3b943c8081f1494a8" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:08 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "36bcdaa31fff514bb25b8f3efdf2eeaa6bfbd79874c206acc432f4fec00fcb30" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:08 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "934a28c9df019edae480a224b3c91aa377297cb1d4697009b3229a8851f765a5" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:09 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c80032eff3ba03e166d2559dc124e47ddc81829d22e1b5d18e338d625cbce942" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:09 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2f1aa19e4b86d69728fc4a8022b063b7157bd8c9621b726d3278b2b2eba37217" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:10 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "82ec03e51a85da1860c2cb2a704591ef0dace6e6d0baf687ccfe6da2fae51f67" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:12 +0000 UTC - event for netserver-0: {kubelet ip-172-20-37-244.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d4050ff8c4ffe3fc8d3ae92f7e32092bbf26853f370b3f51be11a3f2025bbdca" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:15 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "077da98513ec70c66c395a2d8981d977b033608fcbdafe36b061cac0179cfea3" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:18 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "260daa16c822d0d7a9d964d98e1820cac65705049c911815eb314ce6ace4cab4" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:19 +0000 UTC - event for netserver-1: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "67d773e966fb347a64a479fb4a994ba406f26d8b4d86c256f940be74acf2801b" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:23 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b5305ac19938162a20c5cc6ade0e56d414c69ca62f8d6176e1460f7c3a5d17f3" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:24 +0000 UTC - event for netserver-2: {kubelet ip-172-20-56-33.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "334c651f9210425d8d3124ebad7b77b940e0218fbed893d111c93e551a929f60" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:27 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0ea558b1c8bc6db111d9e3e4b0d46459c5300e6d9334e9bf71940b77685e992e" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.524: INFO: At 2023-01-30 22:44:52 +0000 UTC - event for netserver-3: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7dc5d2f0ec4c64b095cc059050258916d7fb710d4ec9381e0cda897a47b36ac7" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-421" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:48:49.667: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:48:49.667: INFO: netserver-0 ip-172-20-37-244.sa-east-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC }] Jan 30 22:48:49.667: INFO: netserver-1 ip-172-20-46-143.sa-east-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:44:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:44:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC }] Jan 30 22:48:49.667: INFO: netserver-2 ip-172-20-56-33.sa-east-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC }] Jan 30 22:48:49.667: INFO: netserver-3 ip-172-20-63-7.sa-east-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:46:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:46:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:43:48 +0000 UTC }] Jan 30 22:48:49.667: INFO: Jan 30 22:48:49.812: INFO: Unable to fetch nettest-421/netserver-0/webserver logs: the server rejected our request for an unknown reason (get pods netserver-0) Jan 30 22:48:50.106: INFO: Unable to fetch nettest-421/netserver-2/webserver logs: the server rejected our request for an unknown reason (get pods netserver-2) Jan 30 22:48:50.398: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:48:50.541: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 4912 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1462":"ip-172-20-37-244.sa-east-1.compute.internal","ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:03 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:44:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:45 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:45 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:45 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:44:45 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1462^96eb72a0-a0ef-11ed-acd9-eaa109521f10],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1462^96eb72a0-a0ef-11ed-acd9-eaa109521f10,DevicePath:,},},Config:nil,},} Jan 30 22:48:50.542: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:48:50.687: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:50.981: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container webserver ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:48:50.981: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:50.981: INFO: hostpath-symlink-prep-provisioning-9122 started at 2023-01-30 22:43:45 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container init-volume-provisioning-9122 ready: false, restart count 0 Jan 30 22:48:50.981: INFO: pod-subpath-test-dynamicpv-r9tf started at 2023-01-30 22:44:03 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Init container init-volume-dynamicpv-r9tf ready: false, restart count 0 Jan 30 22:48:50.981: INFO: Container test-container-subpath-dynamicpv-r9tf ready: false, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:50.981: INFO: netserver-0 started at 2023-01-30 22:43:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container webserver ready: false, restart count 0 Jan 30 22:48:50.981: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:42:55 +0000 UTC (0+7 container statuses recorded) Jan 30 22:48:50.981: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:50.981: INFO: hostpath-symlink-prep-volume-1943 started at 2023-01-30 22:43:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container init-volume-volume-1943 ready: false, restart count 0 Jan 30 22:48:50.981: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:48:50.981: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:48:50.981: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container coredns ready: true, restart count 0 Jan 30 22:48:50.981: INFO: service-proxy-toggled-n48z6 started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:48:50.981: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:50.981: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.493: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:48:51.493: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:48:51.637: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 5552 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:45:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:45:46 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:45:46 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:45:46 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:45:46 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:48:51.638: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:48:51.784: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:48:51.936: INFO: ss2-2 started at 2023-01-30 22:44:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container webserver ready: true, restart count 0 Jan 30 22:48:51.936: INFO: service-proxy-toggled-skcf6 started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2g6gh started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:51.936: INFO: pod-subpath-test-preprovisionedpv-28m7 started at 2023-01-30 22:44:45 +0000 UTC (2+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Init container init-volume-preprovisionedpv-28m7 ready: false, restart count 0 Jan 30 22:48:51.936: INFO: Init container test-init-volume-preprovisionedpv-28m7 ready: false, restart count 0 Jan 30 22:48:51.936: INFO: Container test-container-subpath-preprovisionedpv-28m7 ready: false, restart count 0 Jan 30 22:48:51.936: INFO: busybox-a7af2acc-d391-47b9-a765-65f447f16b43 started at 2023-01-30 22:47:38 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container busybox ready: false, restart count 0 Jan 30 22:48:51.936: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-n2fmm started at 2023-01-30 22:44:28 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-8w4h2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-4cqsj started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: netserver-1 started at 2023-01-30 22:43:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container webserver ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-dw5bz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jh598 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: ss2-0 started at 2023-01-30 22:46:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container webserver ready: false, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sw7v started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: service-proxy-disabled-qfmmj started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-q6pfk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:48:51.936: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-bxpzz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:48:51.936: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:48:51.936: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:51.936: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-pt29g started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:51.936: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sx9d started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:51.936: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:52.624: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:48:52.624: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:48:52.767: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 4459 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:12 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:12 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:12 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:44:12 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:48:52.768: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:48:52.915: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g95mq started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:48:53.068: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:48:53.068: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:48:53.068: INFO: fail-once-non-local-7gtm7 started at 2023-01-30 22:44:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container c ready: false, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rqjpx started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fmwp2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-z46zz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:53.068: INFO: netserver-2 started at 2023-01-30 22:43:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container webserver ready: false, restart count 0 Jan 30 22:48:53.068: INFO: ss2-0 started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container webserver ready: true, restart count 0 Jan 30 22:48:53.068: INFO: fail-once-non-local-nvn9l started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container c ready: false, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cxdvn started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: fail-once-non-local-ksmfx started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container c ready: false, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rj6w6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:53.068: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:48:53.068: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:48:53.068: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:53.068: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:53.068: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container coredns ready: true, restart count 0 Jan 30 22:48:53.068: INFO: service-proxy-disabled-tpm2q started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-k9mvg started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-ctp24 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-hx4t7 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.068: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.068: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:48:53.068: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:48:53.068: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:48:53.068: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:48:53.068: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:48:53.069: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:48:53.069: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:48:53.069: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:48:53.069: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-kl9wl started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.069: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:53.533: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:48:53.533: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:48:53.676: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 4723 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:48:53.676: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:48:53.822: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:48:53.975: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:48:53.975: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:48:53.975: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:48:53.975: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:48:53.975: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:53.975: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:48:53.975: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:48:53.975: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:53.975: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:48:53.975: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:48:53.975: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:48:53.975: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:48:53.975: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:48:53.975: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:48:53.975: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:53.975: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:48:54.448: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:48:54.448: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:48:54.593: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 5834 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","csi-mock-csi-mock-volumes-6130":"csi-mock-csi-mock-volumes-6130","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:46:47 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:46:47 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:46:47 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:46:47 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-6130^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-6130^4,DevicePath:,},},Config:nil,},} Jan 30 22:48:54.594: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:48:54.743: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:48:54.921: INFO: inline-volume-tester-62nrc started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 30 22:48:54.921: INFO: csi-mockplugin-0 started at 2023-01-30 22:42:54 +0000 UTC (0+3 container statuses recorded) Jan 30 22:48:54.921: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container mock ready: true, restart count 0 Jan 30 22:48:54.921: INFO: service-proxy-toggled-bpxnj started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:48:54.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fq2r6 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:54.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-v8ln8 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:54.921: INFO: pod-138c829e-4290-4f54-9c1c-089f02bec4b6 started at 2023-01-30 22:43:57 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:48:54.921: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:44:07 +0000 UTC (0+7 container statuses recorded) Jan 30 22:48:54.921: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:48:54.921: INFO: service-proxy-disabled-4zt6p started at 2023-01-30 22:42:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:48:54.921: INFO: pod-projected-configmaps-a0769f34-6dfb-4a2c-80f6-78bd453780c5 started at 2023-01-30 22:44:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:48:54.921: INFO: pod-handle-http-request started at 2023-01-30 22:44:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:48:54.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cnpbk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:54.921: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-l6ph6 started at 2023-01-30 22:44:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.921: INFO: pod-subpath-test-preprovisionedpv-5bn8 started at 2023-01-30 22:44:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Init container init-volume-preprovisionedpv-5bn8 ready: false, restart count 0 Jan 30 22:48:54.921: INFO: Container test-container-subpath-preprovisionedpv-5bn8 ready: false, restart count 0 Jan 30 22:48:54.921: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-872lq started at 2023-01-30 22:43:35 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.921: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:42:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:48:54.921: INFO: ss2-1 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container webserver ready: false, restart count 0 Jan 30 22:48:54.921: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:48:54.921: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:54.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-9t9kb started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:54.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2m4f6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:54.921: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:48:54.921: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:48:54.921: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:48:54.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-56dt8 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:54.922: INFO: pod-62b86a24-1d8d-45e8-9c99-6e0a651bffe1 started at 2023-01-30 22:43:40 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:48:54.922: INFO: verify-service-up-exec-pod-p5sdh started at 2023-01-30 22:45:14 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:48:54.922: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-6dghb started at 2023-01-30 22:43:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.922: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-lx82t started at 2023-01-30 22:45:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.922: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:48:54.922: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:48:54.922: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-grfw9 started at 2023-01-30 22:45:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.922: INFO: csi-mockplugin-resizer-0 started at 2023-01-30 22:42:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:48:54.922: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-dvn5l started at 2023-01-30 22:43:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.922: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-nft6k started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:54.922: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-zrntw started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:54.922: INFO: netserver-3 started at 2023-01-30 22:43:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container webserver ready: true, restart count 0 Jan 30 22:48:54.922: INFO: pod-c28eace7-f9af-4aa6-896f-40a90618d6c5 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:48:54.922: INFO: verify-service-up-host-exec-pod started at 2023-01-30 22:45:07 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:48:54.922: INFO: pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f started at 2023-01-30 22:44:18 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container env-test ready: false, restart count 0 Jan 30 22:48:54.922: INFO: pod-d8cff309-3d6a-4ce5-9ac9-b57de7155461 started at 2023-01-30 22:45:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:48:54.922: INFO: pvc-volume-tester-ztxmz started at 2023-01-30 22:44:35 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container volume-tester ready: false, restart count 0 Jan 30 22:48:54.922: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-qbtpc started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:48:54.922: INFO: pod-subpath-test-preprovisionedpv-llc9 started at 2023-01-30 22:44:16 +0000 UTC (0+2 container statuses recorded) Jan 30 22:48:54.922: INFO: Container test-container-subpath-preprovisionedpv-llc9 ready: false, restart count 0 Jan 30 22:48:54.922: INFO: Container test-container-volume-preprovisionedpv-llc9 ready: false, restart count 0 Jan 30 22:48:54.922: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fp6pt started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:48:54.922: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:48:55.638: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:48:55.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "nettest-421" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\sbe\sable\sto\schange\sthe\stype\sfrom\sNodePort\sto\sExternalName\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 30 22:54:05.474: Expected Service externalsvc to be running Unexpected error: <*errors.errorString | 0xc00243e8d0>: { s: "only 0 pods started out of 2", } only 0 pods started out of 2 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3382from junit_04.xml
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:49:00.615: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a service nodeport-service with the type=NodePort in namespace services-5017 �[1mSTEP�[0m: Creating active service to test reachability when its FQDN is referred as externalName for another service �[1mSTEP�[0m: creating service externalsvc in namespace services-5017 �[1mSTEP�[0m: creating replication controller externalsvc in namespace services-5017 I0130 22:49:02.055636 6654 runners.go:193] Created replication controller with name: externalsvc, namespace: services-5017, replica count: 2 I0130 22:49:05.206657 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:08.207809 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:11.208882 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:14.211350 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:17.212381 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:20.212680 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:23.214243 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:26.216534 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:29.216841 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:32.217355 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:35.218728 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:38.219803 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:41.222144 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:44.223782 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:47.224023 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:50.224237 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:53.225662 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:56.227968 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:49:59.228330 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:02.228760 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:05.229042 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:08.230094 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:11.231839 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:14.234353 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:17.234710 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:20.235049 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:23.236658 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:26.238901 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:29.241589 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:32.241948 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:35.243835 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:38.245631 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:41.247838 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:44.250382 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:47.251491 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:50.251823 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:53.252176 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:56.252927 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:50:59.253484 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:02.253839 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:05.255833 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:08.257529 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:11.259765 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:14.260175 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:17.260441 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:20.262703 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:23.263012 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:26.263858 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:29.266640 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:32.267673 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:35.268008 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:38.268384 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:41.270742 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:44.271037 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:47.272171 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:50.274494 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:53.274828 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:56.277071 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:51:59.278989 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:02.280218 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:05.281189 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:08.282642 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:11.283870 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:14.286294 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:17.287046 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:20.289281 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:23.290700 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:26.291587 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:29.294178 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:32.294523 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:35.296960 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:38.297235 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:41.297410 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:44.299802 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:47.301029 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:50.303279 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:53.303859 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:56.304194 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:52:59.306825 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:02.307063 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:05.309288 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:08.309572 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:11.310922 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:14.311210 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:17.311412 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:20.313694 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:23.314768 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:26.315437 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:29.315779 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:32.316894 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:35.319213 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:38.320833 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:41.321586 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:44.323903 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:47.324872 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:50.325214 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:53.326722 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:56.327493 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:53:59.329864 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:54:02.330065 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:54:05.332347 6654 runners.go:193] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 22:54:05.474381 6654 runners.go:193] Pod externalsvc-c5qz7 ip-172-20-63-7.sa-east-1.compute.internal Pending <nil> I0130 22:54:05.474475 6654 runners.go:193] Pod externalsvc-qmzsv ip-172-20-46-143.sa-east-1.compute.internal Pending <nil> Jan 30 22:54:05.474: FAIL: Expected Service externalsvc to be running Unexpected error: <*errors.errorString | 0xc00243e8d0>: { s: "only 0 pods started out of 2", } only 0 pods started out of 2 occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.createAndGetExternalServiceFQDN({0x7938928, 0xc003ac2780}, {0xc0040613b0, 0xd}, {0x705c929, 0xb}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3382 +0x108 k8s.io/kubernetes/test/e2e/network.glob..func24.17() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1448 +0x1b5 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000187d40, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f Jan 30 22:54:05.475: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "services-5017". �[1mSTEP�[0m: Found 26 events. Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:01 +0000 UTC - event for externalsvc: {replication-controller } SuccessfulCreate: Created pod: externalsvc-c5qz7 Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:02 +0000 UTC - event for externalsvc: {replication-controller } SuccessfulCreate: Created pod: externalsvc-qmzsv Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:02 +0000 UTC - event for externalsvc-c5qz7: {default-scheduler } Scheduled: Successfully assigned services-5017/externalsvc-c5qz7 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:02 +0000 UTC - event for externalsvc-qmzsv: {default-scheduler } Scheduled: Successfully assigned services-5017/externalsvc-qmzsv to ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:03 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6cd8444f61e970972d621ce71ebf23051e5a185059208821042c959880149a9e" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:05 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f2108f86c4c01ab93f87c77fd315fad2bce0f35544cae4aba6a8ea3950122090" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:05 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:06 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8f800483071781af01a6f3f618756d8b2fc4b899d5323d6d8b4b27a3192f9242" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:07 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f732973942f82fea403c7e8f44738600f7cae35350b49402170b9ad19b6e824b" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:09 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:09 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6544815c2011eb7fde64824aa4daa7904f03fbd455e70d02c6116a374764c510" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:11 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c39b3ee09aff6374b726279654f8678dea8d71ca6bb5875879dcf92db83e630d" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:12 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0d0b7f25387a379e1d22b57e2bc13fb24883f3b3357d5833236c0b7c0b4bc97d" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:13 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a2e306d01b8175d5d3e01bd266830f48afacbc04f961a7af356a34f50fa99d14" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:15 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "80fe60de7b626f3a177938b9a00c40e461ed943889e221404d9c21d6fc19922c" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:16 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6b6fd6cfa2bca3f2be77fe8a00223f4e742d0983d68340bfcd627ae57f7ec17a" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:17 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0050435e6a213e1925b0bbfb5580f317a646ef4a7eeb0843bd5362c0b17ab63f" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:18 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0629ba7e39e1180d49c303465e3a89c4a5c37219c7e910db9c6d6891fb30ef47" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:20 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "555376ac9e0c1e8a56c428669da6acb5ccc03444ec7545205254026c58e999f7" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:21 +0000 UTC - event for externalsvc-qmzsv: {kubelet ip-172-20-46-143.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "520fccb5db2d72fd37ba27f2d069869c6e0c3ab2450d6cbe55df8caf5ab2a076" network for pod "externalsvc-qmzsv": networkPlugin cni failed to set up pod "externalsvc-qmzsv_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:25 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8327413ada867b591ef2acd664409e9488ce77a21cb010ada0ae20a65260b864" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:31 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "53655c6289af26ef4e3c91a5a3eb551de74dbf3d84406764e8f85f75d01505e8" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:38 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e261675f48bd519998af6938a29354af0cabeb9ff1fe37a24358d48617853236" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:45 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9acf30f1fa8d4356e2cd9d2a54a1a04062e4c35d374413201d63ef6f34e6197f" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:52 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "236d4ca21447639b8b1e6eea9e9c0bf2f72cfc049e1ef12f452c996b0cdb3c6c" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.791: INFO: At 2023-01-30 22:49:59 +0000 UTC - event for externalsvc-c5qz7: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0900eeda05e908dc110bccee68ce8959c7d6e127acdbd7a70de7d14c81c97a08" network for pod "externalsvc-c5qz7": networkPlugin cni failed to set up pod "externalsvc-c5qz7_services-5017" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:54:05.933: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:54:05.933: INFO: externalsvc-c5qz7 ip-172-20-63-7.sa-east-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC ContainersNotReady containers with unready status: [externalsvc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC ContainersNotReady containers with unready status: [externalsvc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:01 +0000 UTC }] Jan 30 22:54:05.933: INFO: externalsvc-qmzsv ip-172-20-46-143.sa-east-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC ContainersNotReady containers with unready status: [externalsvc]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC ContainersNotReady containers with unready status: [externalsvc]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:49:02 +0000 UTC }] Jan 30 22:54:05.933: INFO: Jan 30 22:54:06.085: INFO: Unable to fetch services-5017/externalsvc-c5qz7/externalsvc logs: the server rejected our request for an unknown reason (get pods externalsvc-c5qz7) Jan 30 22:54:06.230: INFO: Unable to fetch services-5017/externalsvc-qmzsv/externalsvc logs: the server rejected our request for an unknown reason (get pods externalsvc-qmzsv) Jan 30 22:54:06.373: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:06.515: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 6805 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:06.516: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:06.679: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:06.831: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container coredns ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:06.831: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:06.831: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:06.831: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:06.831: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: rs-hh4qw started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:06.831: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:06.831: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-5pprs started at 2023-01-30 22:50:34 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:06.831: INFO: pod-subpath-test-preprovisionedpv-zc5t started at 2023-01-30 22:50:47 +0000 UTC (1+2 container statuses recorded) Jan 30 22:54:06.831: INFO: Init container test-init-subpath-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:06.831: INFO: Container test-container-subpath-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:06.831: INFO: Container test-container-volume-preprovisionedpv-zc5t ready: false, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:06.831: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:06.831: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.284: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:54:07.284: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:07.426: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 6657 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:49:03 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-02a6ee60ede372824],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-02a6ee60ede372824,DevicePath:,},},Config:nil,},} Jan 30 22:54:07.427: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:07.571: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:07.725: INFO: ss2-0 started at 2023-01-30 22:46:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container webserver ready: false, restart count 0 Jan 30 22:54:07.725: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-s8w47 started at 2023-01-30 22:48:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:07.725: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sw7v started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.725: INFO: rs-8d5pg started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:07.725: INFO: pod-secrets-025721a8-1f1a-425c-b117-8841c9b333cd started at 2023-01-30 22:49:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container secret-volume-test ready: false, restart count 0 Jan 30 22:54:07.725: INFO: pod-subpath-test-preprovisionedpv-z9fs started at 2023-01-30 22:50:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container test-container-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:54:07.725: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-q6pfk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.725: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:07.725: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:07.725: INFO: externalsvc-qmzsv started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.725: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-bxpzz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:07.726: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:07.726: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:07.726: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:07.726: INFO: busybox-host-aliases4c49ce25-2bdd-4be9-8511-41e5a85d0929 started at 2023-01-30 22:50:17 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container busybox-host-aliases4c49ce25-2bdd-4be9-8511-41e5a85d0929 ready: false, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-pt29g started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sx9d started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: pod-4f1d4caa-3b55-4d16-b486-82f59f49f567 started at 2023-01-30 22:50:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container test-container ready: false, restart count 0 Jan 30 22:54:07.726: INFO: pod-cf5ae510-5ee5-443b-b0c3-086ca0deda69 started at 2023-01-30 22:49:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:07.726: INFO: ss2-2 started at 2023-01-30 22:44:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2g6gh started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-g66zc started at 2023-01-30 22:49:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-8w4h2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-4cqsj started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: adopt-release-qqtpc started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container c ready: true, restart count 0 Jan 30 22:54:07.726: INFO: adopt-release-rpjrs started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container c ready: false, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-dw5bz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:07.726: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jh598 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:07.726: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.517: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:54:08.517: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:08.661: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 6773 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:08.662: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:08.807: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:08.961: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:08.961: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:08.961: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:08.961: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:08.961: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container coredns ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-ctp24 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: exec-volume-test-preprovisionedpv-4xz9 started at 2023-01-30 22:51:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container exec-container-preprovisionedpv-4xz9 ready: false, restart count 0 Jan 30 22:54:08.961: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-tnngb started at 2023-01-30 22:49:40 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:08.961: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rqjpx started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fmwp2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g95mq started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: ss2-0 started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container webserver ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-z46zz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:54:08.961: INFO: fail-once-non-local-ksmfx started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container c ready: false, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-k9mvg started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rj6w6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-hx4t7 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.961: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.961: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:49:23 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:08.961: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:08.961: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:08.961: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:08.961: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:08.961: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:08.961: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:08.961: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:08.961: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:08.962: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:54:08.962: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:08.962: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:54:08.962: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:54:08.962: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:54:08.962: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:54:08.962: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:54:08.962: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-kl9wl started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:08.962: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-dgjvl started at 2023-01-30 22:51:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:08.962: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:08.962: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:08.962: INFO: fail-once-non-local-7gtm7 started at 2023-01-30 22:44:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Container c ready: false, restart count 0 Jan 30 22:54:08.962: INFO: local-injector started at 2023-01-30 22:50:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Container local-injector ready: false, restart count 0 Jan 30 22:54:08.962: INFO: fail-once-non-local-nvn9l started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Container c ready: false, restart count 0 Jan 30 22:54:08.962: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cxdvn started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:08.962: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:09.727: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:54:09.727: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:09.869: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 7035 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:54:09.870: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:10.015: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:10.165: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:54:10.165: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:54:10.165: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:54:10.165: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:54:10.165: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:54:10.165: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:54:10.165: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:54:10.165: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:10.165: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:54:10.165: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:54:10.165: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:54:10.165: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:10.165: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:10.165: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:10.165: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:10.165: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:10.614: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:54:10.614: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:10.756: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 7860 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},},Config:nil,},} Jan 30 22:54:10.757: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:10.901: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-9t9kb started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: ss2-1 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container webserver ready: false, restart count 0 Jan 30 22:54:11.064: INFO: httpd started at 2023-01-30 22:52:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container httpd ready: false, restart count 0 Jan 30 22:54:11.064: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:54:11.064: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-grfw9 started at 2023-01-30 22:45:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-nft6k started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-zrntw started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fp6pt started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-qbtpc started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: externalsvc-c5qz7 started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-v8ln8 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: csi-mockplugin-0 started at 2023-01-30 22:49:11 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:11.064: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:54:11.064: INFO: Container driver-registrar ready: false, restart count 0 Jan 30 22:54:11.064: INFO: Container mock ready: false, restart count 0 Jan 30 22:54:11.064: INFO: termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2 started at 2023-01-30 22:50:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container termination-message-container ready: false, restart count 0 Jan 30 22:54:11.064: INFO: pod-9361d956-3a9e-45fa-92dc-ac8884faccaa started at 2023-01-30 22:50:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:11.064: INFO: startup-d0748011-46a8-4bb4-9fe0-3c4baf5fbfed started at 2023-01-30 22:49:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container busybox ready: false, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cnpbk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:54:11.064: INFO: pod-70b62feb-0f03-4bb7-97a6-9bed39f38a55 started at 2023-01-30 22:50:29 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:54:11.064: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:54:11.064: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:11.064: INFO: rs-4k8s4 started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container donothing ready: false, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2m4f6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:54:11.064: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:54:11.064: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-56dt8 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:11.064: INFO: pod-d8cff309-3d6a-4ce5-9ac9-b57de7155461 started at 2023-01-30 22:45:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container write-pod ready: true, restart count 0 Jan 30 22:54:11.064: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:49:11 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:54:11.064: INFO: inline-volume-tester-62nrc started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 30 22:54:11.064: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fq2r6 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:54:11.064: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:54:12.436: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:54:12.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-5017" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sConfigMap\sshould\sbe\sconsumable\svia\senvironment\svariable\s\[NodeConformance\]\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 30 22:49:31.890: Unexpected error: <*errors.errorString | 0xc0031d4830>: { s: "expected pod \"pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f\" to be \"Succeeded or Failed\"", } expected pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f" success: Gave up after waiting 5m0s for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f" to be "Succeeded or Failed" occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:770from junit_12.xml
[BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:44:17.738: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap configmap-6262/configmap-test-532e5187-b311-4f25-a931-6c048852a402 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 30 22:44:19.028: INFO: Waiting up to 5m0s for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f" in namespace "configmap-6262" to be "Succeeded or Failed" Jan 30 22:44:19.170: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 142.452449ms Jan 30 22:44:21.314: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286094055s Jan 30 22:44:23.457: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429304238s Jan 30 22:44:25.601: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.573039878s Jan 30 22:44:27.744: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716557557s Jan 30 22:44:29.888: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.860689324s Jan 30 22:44:32.037: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.009148213s Jan 30 22:44:34.186: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.157902869s Jan 30 22:44:36.330: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.30206514s Jan 30 22:44:38.473: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.445162305s Jan 30 22:44:40.617: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.589272911s Jan 30 22:44:42.761: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.733378783s Jan 30 22:44:44.905: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.876901313s Jan 30 22:44:47.054: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.025965603s Jan 30 22:44:49.198: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.170524601s Jan 30 22:44:51.342: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.314719603s Jan 30 22:44:53.487: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.459109726s Jan 30 22:44:55.631: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.602862601s Jan 30 22:44:57.774: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.746022403s Jan 30 22:44:59.917: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.889547807s Jan 30 22:45:02.062: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 43.034185041s Jan 30 22:45:04.204: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.176656828s Jan 30 22:45:06.347: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.319364128s Jan 30 22:45:08.490: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.462765473s Jan 30 22:45:10.644: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.616626123s Jan 30 22:45:12.788: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 53.759897362s Jan 30 22:45:14.931: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.903246965s Jan 30 22:45:17.074: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 58.04628999s Jan 30 22:45:19.217: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.188958188s Jan 30 22:45:21.361: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.332789271s Jan 30 22:45:23.503: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.475669229s Jan 30 22:45:25.648: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.619885395s Jan 30 22:45:27.791: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.762984348s Jan 30 22:45:29.935: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.907286708s Jan 30 22:45:32.079: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.051699046s Jan 30 22:45:34.229: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.200939289s Jan 30 22:45:36.372: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.34469916s Jan 30 22:45:38.516: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.487927761s Jan 30 22:45:40.659: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.631742814s Jan 30 22:45:42.803: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.775362498s Jan 30 22:45:44.947: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.919466062s Jan 30 22:45:47.091: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.063077924s Jan 30 22:45:49.235: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.206789077s Jan 30 22:45:51.377: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.349700415s Jan 30 22:45:53.521: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.493643565s Jan 30 22:45:55.665: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.636878778s Jan 30 22:45:57.808: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.780047352s Jan 30 22:45:59.952: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.924161022s Jan 30 22:46:02.095: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m43.067710534s Jan 30 22:46:04.239: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m45.211362925s Jan 30 22:46:06.383: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m47.355397691s Jan 30 22:46:08.526: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m49.498349258s Jan 30 22:46:10.670: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m51.642173331s Jan 30 22:46:12.813: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m53.785378995s Jan 30 22:46:14.966: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m55.938487018s Jan 30 22:46:17.110: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.082362342s Jan 30 22:46:19.253: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.225115426s Jan 30 22:46:21.397: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.369026156s Jan 30 22:46:23.540: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.512020492s Jan 30 22:46:25.683: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.655644922s Jan 30 22:46:27.827: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.799105673s Jan 30 22:46:29.970: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.9422785s Jan 30 22:46:32.114: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m13.085992097s Jan 30 22:46:34.258: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m15.230104574s Jan 30 22:46:36.402: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m17.374388512s Jan 30 22:46:38.545: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m19.51722897s Jan 30 22:46:40.689: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m21.66135178s Jan 30 22:46:42.833: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m23.805251955s Jan 30 22:46:44.978: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m25.950498451s Jan 30 22:46:47.122: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.093986121s Jan 30 22:46:49.265: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.237017659s Jan 30 22:46:51.409: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.380909885s Jan 30 22:46:53.552: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.524208936s Jan 30 22:46:55.695: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.667622763s Jan 30 22:46:57.839: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.811074066s Jan 30 22:46:59.983: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.955103091s Jan 30 22:47:02.127: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m43.099192322s Jan 30 22:47:04.270: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m45.242593899s Jan 30 22:47:06.414: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m47.385806611s Jan 30 22:47:08.557: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m49.52885054s Jan 30 22:47:10.701: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m51.673402445s Jan 30 22:47:12.844: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m53.816579588s Jan 30 22:47:14.988: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m55.960658829s Jan 30 22:47:17.131: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.103707981s Jan 30 22:47:19.278: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.249780454s Jan 30 22:47:21.421: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.39376085s Jan 30 22:47:23.565: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.537077444s Jan 30 22:47:25.708: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.680550344s Jan 30 22:47:27.851: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.823580527s Jan 30 22:47:29.995: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.966830654s Jan 30 22:47:32.138: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m13.110650239s Jan 30 22:47:34.282: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m15.253948933s Jan 30 22:47:36.426: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m17.397778328s Jan 30 22:47:38.569: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m19.541111374s Jan 30 22:47:40.712: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m21.684372496s Jan 30 22:47:42.856: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m23.82795175s Jan 30 22:47:44.999: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m25.971645565s Jan 30 22:47:47.143: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.115308238s Jan 30 22:47:49.286: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.258528798s Jan 30 22:47:51.429: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.401586182s Jan 30 22:47:53.573: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.545361277s Jan 30 22:47:55.716: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.688107686s Jan 30 22:47:57.860: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.831833132s Jan 30 22:48:00.003: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.975430478s Jan 30 22:48:02.147: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m43.119124125s Jan 30 22:48:04.290: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m45.262083934s Jan 30 22:48:06.434: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m47.405938544s Jan 30 22:48:08.579: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m49.550789525s Jan 30 22:48:10.721: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m51.693630981s Jan 30 22:48:12.864: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m53.836771335s Jan 30 22:48:15.008: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m55.980460522s Jan 30 22:48:17.152: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.124375135s Jan 30 22:48:19.295: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.267498367s Jan 30 22:48:21.439: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.411156637s Jan 30 22:48:23.583: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.554921072s Jan 30 22:48:25.726: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.698060682s Jan 30 22:48:27.869: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.841027879s Jan 30 22:48:30.013: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.985305865s Jan 30 22:48:32.157: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m13.12886569s Jan 30 22:48:34.301: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m15.273136364s Jan 30 22:48:36.444: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m17.416649022s Jan 30 22:48:38.588: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m19.559936945s Jan 30 22:48:40.731: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m21.703295083s Jan 30 22:48:42.874: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m23.84609535s Jan 30 22:48:45.017: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m25.989532361s Jan 30 22:48:47.161: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.13346299s Jan 30 22:48:49.305: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.276895957s Jan 30 22:48:51.448: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.419871115s Jan 30 22:48:53.591: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.563405731s Jan 30 22:48:55.735: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.706961158s Jan 30 22:48:57.878: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.850034582s Jan 30 22:49:00.021: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.99377239s Jan 30 22:49:02.165: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m43.137768053s Jan 30 22:49:04.310: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m45.281990015s Jan 30 22:49:06.453: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.425222475s Jan 30 22:49:08.596: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.568597722s Jan 30 22:49:10.742: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.714395829s Jan 30 22:49:12.886: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.858077133s Jan 30 22:49:15.029: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.001032314s Jan 30 22:49:17.172: INFO: Pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.143830921s Jan 30 22:49:19.458: INFO: Failed to get logs from node "ip-172-20-63-7.sa-east-1.compute.internal" pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f" container "env-test": the server rejected our request for an unknown reason (get pods pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f) �[1mSTEP�[0m: delete the pod Jan 30 22:49:19.604: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:19.747: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f still exists Jan 30 22:49:21.748: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:21.890: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f still exists Jan 30 22:49:23.747: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:23.889: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f still exists Jan 30 22:49:25.747: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:25.890: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f still exists Jan 30 22:49:27.747: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:27.890: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f still exists Jan 30 22:49:29.747: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:29.890: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f still exists Jan 30 22:49:31.748: INFO: Waiting for pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to disappear Jan 30 22:49:31.890: INFO: Pod pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f no longer exists Jan 30 22:49:31.890: FAIL: Unexpected error: <*errors.errorString | 0xc0031d4830>: { s: "expected pod \"pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f\" to be \"Succeeded or Failed\"", } expected pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f" success: Gave up after waiting 5m0s for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0x7050d8c?, {0x707dc55?, 0xc00328f190?}, 0xc003dc4800, 0x0, {0xc00328f160, 0x1, 0x1}, 0x0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:770 +0x176 k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:567 k8s.io/kubernetes/test/e2e/common/node.glob..func1.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:80 +0x845 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000102680, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "configmap-6262". �[1mSTEP�[0m: Found 12 events. Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:18 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {default-scheduler } Scheduled: Successfully assigned configmap-6262/pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:21 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fc1f144bbbcf4822c41352293775a1054af49a961e66152bea10dccbd31b4da0" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:23 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:24 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ac27ecec4f195fb7cf0c3465b6a7e6e27c80a10e066a538343e4ee06f50f66f" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:27 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "450c4199b21834cfbda47b6a1a670ba0e38074c6bc5aec9f667448c26d752232" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:30 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9b952be3212afd835415e26a79b4df77afe60ffa54d65b8591762899b2841525" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:35 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4219a54317a5eeb6d27e45b19bc18c94240696beeafed9f02df3cc684bfc4704" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:40 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1d4506ef92cecd6678f41dfedb36939a803207a7864bebfdd2f08047cc176d6a" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:43 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dc8028e7132acb1e7159ecdb3d9a24c7c640bb767d36643f05683dff8f481815" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:46 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4c832846942d71b828ae38995999a33edc387d4b139c9ed28959995e77a3d60c" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:44:49 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a96e5702236932a38338267d919e601468c273f2c935f2b96d6dc3862449119f" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.043: INFO: At 2023-01-30 22:45:13 +0000 UTC - event for pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e61ead94f811d8702c202681db9fa310bcaf9934a4fed9e2f841b2f848ea894c" network for pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f": networkPlugin cni failed to set up pod "pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f_configmap-6262" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:32.186: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:49:32.186: INFO: Jan 30 22:49:32.334: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:32.476: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 6805 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:49:32.477: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:32.628: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:32.921: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:49:32.921: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:49:32.921: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container coredns ready: true, restart count 0 Jan 30 22:49:32.921: INFO: service-proxy-toggled-n48z6 started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: rs-hh4qw started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container donothing ready: false, restart count 0 Jan 30 22:49:32.921: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container webserver ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:32.921: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:32.921: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:32.921: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:32.921: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:32.921: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.375: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:33.375: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:49:33.518: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 6657 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:49:03 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-02a6ee60ede372824],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-02a6ee60ede372824,DevicePath:,},},Config:nil,},} Jan 30 22:49:33.519: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:49:33.664: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:49:33.955: INFO: ss2-0 started at 2023-01-30 22:46:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container webserver ready: false, restart count 0 Jan 30 22:49:33.955: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-s8w47 started at 2023-01-30 22:48:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sw7v started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: rs-8d5pg started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container donothing ready: false, restart count 0 Jan 30 22:49:33.955: INFO: service-proxy-disabled-qfmmj started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-q6pfk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:49:33.955: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:49:33.955: INFO: externalsvc-qmzsv started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-bxpzz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:33.955: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:33.955: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:33.955: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-pt29g started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sx9d started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:33.955: INFO: pod-cf5ae510-5ee5-443b-b0c3-086ca0deda69 started at 2023-01-30 22:49:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:49:33.955: INFO: ss2-2 started at 2023-01-30 22:44:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container webserver ready: true, restart count 0 Jan 30 22:49:33.955: INFO: service-proxy-toggled-skcf6 started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2g6gh started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-g66zc started at 2023-01-30 22:49:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:33.955: INFO: busybox-a7af2acc-d391-47b9-a765-65f447f16b43 started at 2023-01-30 22:47:38 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container busybox ready: false, restart count 0 Jan 30 22:49:33.955: INFO: pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f started at 2023-01-30 22:49:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-8w4h2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-4cqsj started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: pod-subpath-test-preprovisionedpv-z9fs started at 2023-01-30 22:49:01 +0000 UTC (2+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Init container init-volume-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:49:33.955: INFO: Init container test-init-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:49:33.955: INFO: Container test-container-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-dw5bz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:33.955: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jh598 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:33.955: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.421: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:49:34.422: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:49:34.564: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 6773 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:49:34.565: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:49:34.710: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-k9mvg started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rj6w6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:34.861: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:34.861: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:34.861: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:34.861: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:34.861: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container coredns ready: true, restart count 0 Jan 30 22:49:34.861: INFO: service-proxy-disabled-tpm2q started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-kl9wl started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-ctp24 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-hx4t7 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:49:23 +0000 UTC (0+7 container statuses recorded) Jan 30 22:49:34.861: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:49:34.861: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:49:34.861: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:49:34.861: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rqjpx started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fmwp2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g95mq started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:49:34.861: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:49:34.861: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:49:34.861: INFO: fail-once-non-local-7gtm7 started at 2023-01-30 22:44:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container c ready: false, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cxdvn started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:34.861: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-z46zz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:34.861: INFO: ss2-0 started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container webserver ready: true, restart count 0 Jan 30 22:49:34.861: INFO: fail-once-non-local-nvn9l started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container c ready: false, restart count 0 Jan 30 22:49:34.861: INFO: fail-once-non-local-ksmfx started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:34.861: INFO: Container c ready: false, restart count 0 Jan 30 22:49:35.572: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:49:35.572: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:49:35.715: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 4723 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:44:30 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:49:35.715: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:49:35.864: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:49:36.012: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.012: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:49:36.012: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:49:36.013: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:49:36.013: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:36.013: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:36.013: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:49:36.013: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:49:36.013: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:49:36.013: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:49:36.013: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:49:36.013: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:49:36.013: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.013: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:49:36.013: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:49:36.013: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:36.013: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:36.463: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:49:36.463: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:49:36.605: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 6437 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","csi-mock-csi-mock-volumes-6130":"csi-mock-csi-mock-volumes-6130","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:00 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:00 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:00 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:00 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-6130^4 kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-6130^4,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},},Config:nil,},} Jan 30 22:49:36.606: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:49:36.754: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-nft6k started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-zrntw started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:36.909: INFO: pod-c28eace7-f9af-4aa6-896f-40a90618d6c5 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:49:36.909: INFO: verify-service-up-host-exec-pod started at 2023-01-30 22:45:07 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:36.909: INFO: csi-mockplugin-resizer-0 started at 2023-01-30 22:42:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:49:36.909: INFO: pod-d8cff309-3d6a-4ce5-9ac9-b57de7155461 started at 2023-01-30 22:45:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:49:36.909: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:49:11 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fp6pt started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:36.909: INFO: pvc-volume-tester-ztxmz started at 2023-01-30 22:44:35 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container volume-tester ready: false, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-qbtpc started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:36.909: INFO: inline-volume-tester-62nrc started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 30 22:49:36.909: INFO: externalsvc-c5qz7 started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fq2r6 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-v8ln8 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:36.909: INFO: service-proxy-disabled-4zt6p started at 2023-01-30 22:42:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:49:36.909: INFO: pod-projected-configmaps-a0769f34-6dfb-4a2c-80f6-78bd453780c5 started at 2023-01-30 22:44:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:49:36.909: INFO: csi-mockplugin-0 started at 2023-01-30 22:42:54 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container mock ready: true, restart count 0 Jan 30 22:49:36.909: INFO: service-proxy-toggled-bpxnj started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:49:36.909: INFO: csi-mockplugin-0 started at 2023-01-30 22:49:11 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:49:36.909: INFO: Container driver-registrar ready: false, restart count 0 Jan 30 22:49:36.909: INFO: Container mock ready: false, restart count 0 Jan 30 22:49:36.909: INFO: pod-handle-http-request started at 2023-01-30 22:44:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:49:36.909: INFO: pod-subpath-test-preprovisionedpv-5bn8 started at 2023-01-30 22:44:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Init container init-volume-preprovisionedpv-5bn8 ready: false, restart count 0 Jan 30 22:49:36.909: INFO: Container test-container-subpath-preprovisionedpv-5bn8 ready: false, restart count 0 Jan 30 22:49:36.909: INFO: startup-d0748011-46a8-4bb4-9fe0-3c4baf5fbfed started at 2023-01-30 22:49:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container busybox ready: false, restart count 0 Jan 30 22:49:36.909: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:42:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cnpbk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:36.909: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-l6ph6 started at 2023-01-30 22:44:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:36.909: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:36.909: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-9t9kb started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:36.909: INFO: ss2-1 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container webserver ready: false, restart count 0 Jan 30 22:49:36.909: INFO: rs-4k8s4 started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container donothing ready: false, restart count 0 Jan 30 22:49:36.909: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:49:36.909: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-56dt8 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:36.909: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2m4f6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:36.909: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-lx82t started at 2023-01-30 22:45:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:36.909: INFO: pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 started at 2023-01-30 22:48:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:49:36.909: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:49:36.909: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:49:36.909: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-grfw9 started at 2023-01-30 22:45:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:36.909: INFO: verify-service-up-exec-pod-p5sdh started at 2023-01-30 22:45:14 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:36.909: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:49:37.815: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:49:37.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-6262" for this suite.
Find pod-configmaps-4b63007d-dec1-4e76-a10d-e7bfb44a077f mentions in log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 Jan 30 22:49:58.782: Unexpected error: <*errors.errorString | 0xc000264240>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107from junit_18.xml
[BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:44:57.064: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Jan 30 22:44:58.494: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:00.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:02.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:04.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:06.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:08.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:10.644: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:12.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:14.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:16.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:18.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:20.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:22.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:24.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:26.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:28.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:30.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:32.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:34.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:36.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:38.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:40.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:42.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:44.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:46.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:48.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:50.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:52.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:54.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:56.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:45:58.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:00.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:02.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:04.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:06.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:08.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:10.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:12.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:14.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:16.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:18.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:20.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:22.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:24.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:26.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:28.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:30.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:32.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:34.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:36.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:38.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:40.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:42.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:44.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:46.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:48.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:50.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:52.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:54.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:56.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:46:58.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:00.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:02.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:04.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:06.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:08.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:10.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:12.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:14.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:16.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:18.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:20.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:22.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:24.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:26.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:28.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:30.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:32.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:34.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:36.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:38.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:40.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:42.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:44.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:46.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:48.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:50.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:52.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:54.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:56.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:47:58.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:00.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:02.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:04.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:06.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:08.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:10.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:12.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:14.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:16.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:18.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:20.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:22.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:24.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:26.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:28.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:30.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:32.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:34.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:36.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:38.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:40.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:42.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:44.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:46.642: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:48.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:50.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:52.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:54.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:56.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:48:58.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:00.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:02.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:04.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:06.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:08.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:10.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:12.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:14.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:16.642: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:18.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:20.637: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:22.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:24.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:26.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:28.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:30.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:32.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:34.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:36.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:38.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:40.640: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:42.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:44.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:46.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:48.643: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:50.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:52.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:54.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:56.638: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:58.639: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:58.781: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 30 22:49:58.782: FAIL: Unexpected error: <*errors.errorString | 0xc000264240>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc001eb7e90, 0x0?) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:63 +0x3da k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000c8bd40, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "container-lifecycle-hook-3887". �[1mSTEP�[0m: Found 12 events. Jan 30 22:49:58.925: INFO: At 2023-01-30 22:44:58 +0000 UTC - event for pod-handle-http-request: {default-scheduler } Scheduled: Successfully assigned container-lifecycle-hook-3887/pod-handle-http-request to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:02 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9a1299f292578ac76fa685b230c1aa0f8c79b391c8338aaa54503faa0918834f" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:05 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:08 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b7d5f3a320f5115981acc1a564f1a2c7c2d0ef031ba0b251cd2255fbbd06219b" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:12 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c5cfd8491f09577c446ec3f95b5e986e3851e972a03000e27c43440bf2871872" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:16 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d5fe11ad36e47c9bbd3159e860440ef70a2c12bd7bbbd5f3bf336b4f116f29de" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:20 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1fd3389897043e77c8f0d578c450134752a73e3a092d2a68d51826b0f7e19253" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:27 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f538904f18ae4133361c2249b0218eba09413239cef19dabf1707bde96104b90" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:31 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c80bf33db799d63e1834ff7e4cf34bf3183c299571e51ddd610dbc030c27aeb6" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:37 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bdc1b6d2935f2dee985080bbb411d0181d96ec817d1f907ae7c0dead57f37670" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:43 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d7983fe4fa2ccd73b7eb09954505db1636226201180d6760ea5a81b289803e22" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:58.925: INFO: At 2023-01-30 22:45:48 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d9b8e5355ecbfdca6f9b3ad1960d9b06c63c20af0f3fba3192a1898eaf29b3df" network for pod "pod-handle-http-request": networkPlugin cni failed to set up pod "pod-handle-http-request_container-lifecycle-hook-3887" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:49:59.067: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:49:59.067: INFO: pod-handle-http-request ip-172-20-63-7.sa-east-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:44:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:44:58 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:44:58 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 22:44:58 +0000 UTC }] Jan 30 22:49:59.068: INFO: Jan 30 22:49:59.215: INFO: Unable to fetch container-lifecycle-hook-3887/pod-handle-http-request/agnhost-container logs: the server rejected our request for an unknown reason (get pods pod-handle-http-request) Jan 30 22:49:59.361: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:59.503: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 6805 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:49:59.504: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:59.650: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:49:59.941: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: pod-ef13f0ad-efe6-4d7d-9388-1eb2e12461e1 started at <nil> (0+0 container statuses recorded) Jan 30 22:49:59.942: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container coredns ready: true, restart count 0 Jan 30 22:49:59.942: INFO: service-proxy-toggled-n48z6 started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:49:59.942: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-pxhbj started at 2023-01-30 22:49:41 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:49:59.942: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container webserver ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:49:59.942: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:49:59.942: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:49:59.942: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:49:59.942: INFO: rs-hh4qw started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container donothing ready: false, restart count 0 Jan 30 22:49:59.942: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:49:59.942: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:49:59.942: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:49:59.942: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.015: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:50:01.015: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:50:01.158: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 6657 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:49:03 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:11 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-02a6ee60ede372824],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-02a6ee60ede372824,DevicePath:,},},Config:nil,},} Jan 30 22:50:01.158: INFO: Logging kubelet events for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:50:01.303: INFO: Logging pods the kubelet thinks is on node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-pt29g started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sx9d started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:01.460: INFO: ebs-csi-node-qjvfh started at 2023-01-30 22:39:08 +0000 UTC (0+3 container statuses recorded) Jan 30 22:50:01.460: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:50:01.460: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:50:01.460: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:50:01.460: INFO: service-proxy-toggled-skcf6 started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2g6gh started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-g66zc started at 2023-01-30 22:49:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:50:01.460: INFO: busybox-a7af2acc-d391-47b9-a765-65f447f16b43 started at 2023-01-30 22:47:38 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container busybox ready: false, restart count 0 Jan 30 22:50:01.460: INFO: pod-5e8df40f-c0b1-42a7-81f6-8305557f6e9f started at 2023-01-30 22:49:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:50:01.460: INFO: pod-cf5ae510-5ee5-443b-b0c3-086ca0deda69 started at 2023-01-30 22:49:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:50:01.460: INFO: ss2-2 started at 2023-01-30 22:44:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container webserver ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-8w4h2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-4cqsj started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: adopt-release-qqtpc started at 2023-01-30 22:49:55 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container c ready: false, restart count 0 Jan 30 22:50:01.460: INFO: adopt-release-rpjrs started at <nil> (0+0 container statuses recorded) Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-dw5bz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jh598 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: pod-subpath-test-preprovisionedpv-z9fs started at 2023-01-30 22:49:01 +0000 UTC (2+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Init container init-volume-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:50:01.460: INFO: Init container test-init-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:50:01.460: INFO: Container test-container-subpath-preprovisionedpv-z9fs ready: false, restart count 0 Jan 30 22:50:01.460: INFO: ss2-0 started at 2023-01-30 22:46:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container webserver ready: false, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2sw7v started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: hostexec-ip-172-20-46-143.sa-east-1.compute.internal-s8w47 started at 2023-01-30 22:48:52 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:50:01.460: INFO: pod-secrets-025721a8-1f1a-425c-b117-8841c9b333cd started at 2023-01-30 22:49:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container secret-volume-test ready: false, restart count 0 Jan 30 22:50:01.460: INFO: service-proxy-disabled-qfmmj started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-q6pfk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: rs-8d5pg started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container donothing ready: false, restart count 0 Jan 30 22:50:01.460: INFO: externalsvc-qmzsv started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:50:01.460: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-bxpzz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:01.460: INFO: cilium-m624g started at 2023-01-30 22:39:08 +0000 UTC (1+1 container statuses recorded) Jan 30 22:50:01.460: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:50:01.460: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:50:02.258: INFO: Latency metrics for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:50:02.258: INFO: Logging node info for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:50:02.405: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-56-33.sa-east-1.compute.internal 954986f9-8a0c-45d3-a91c-b10fd929b91d 6773 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-56-33.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09e0b8ffb97d8ede2"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 22:39:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-09e0b8ffb97d8ede2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.56.33,},NodeAddress{Type:ExternalIP,Address:54.233.226.185,},NodeAddress{Type:Hostname,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-56-33.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-233-226-185.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2d7bffa4e33f064a7a3db7aac73580,SystemUUID:ec2d7bff-a4e3-3f06-4a7a-3db7aac73580,BootID:749c0ee0-ccbf-48a5-9702-baf2673813b3,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:50:02.405: INFO: Logging kubelet events for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:50:02.551: INFO: Logging pods the kubelet thinks is on node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:50:02.704: INFO: service-proxy-disabled-tpm2q started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-k9mvg started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rj6w6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-hx4t7 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:49:23 +0000 UTC (0+7 container statuses recorded) Jan 30 22:50:02.704: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:50:02.704: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:45:43 +0000 UTC (0+7 container statuses recorded) Jan 30 22:50:02.704: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:50:02.704: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-kl9wl started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cilium-rrh22 started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:50:02.704: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:50:02.704: INFO: fail-once-non-local-7gtm7 started at 2023-01-30 22:44:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container c ready: false, restart count 0 Jan 30 22:50:02.704: INFO: local-injector started at 2023-01-30 22:50:01 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container local-injector ready: false, restart count 0 Jan 30 22:50:02.704: INFO: fail-once-non-local-nvn9l started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container c ready: false, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cxdvn started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: ebs-csi-node-846kf started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:50:02.704: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:50:02.704: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:50:02.704: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:50:02.704: INFO: coredns-867df8f45c-txv2h started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container coredns ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-ctp24 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: hostexec-ip-172-20-56-33.sa-east-1.compute.internal-tnngb started at 2023-01-30 22:49:40 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:50:02.704: INFO: coredns-autoscaler-557ccb4c66-vs6br started at 2023-01-30 22:39:26 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container autoscaler ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-rqjpx started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fmwp2 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g95mq started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:02.704: INFO: ss2-0 started at 2023-01-30 22:42:49 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container webserver ready: true, restart count 0 Jan 30 22:50:02.704: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-z46zz started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:02.704: INFO: fail-once-non-local-ksmfx started at 2023-01-30 22:43:53 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:02.704: INFO: Container c ready: false, restart count 0 Jan 30 22:50:03.241: INFO: Latency metrics for node ip-172-20-56-33.sa-east-1.compute.internal Jan 30 22:50:03.242: INFO: Logging node info for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:50:03.385: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-44.sa-east-1.compute.internal f7fcefff-e13d-4383-8796-cdc02ac9be26 7035 0 2023-01-30 22:37:10 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-44.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-020b2e4354e67a776"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 22:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-30 22:37:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-30 22:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:37:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 22:38:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-020b2e4354e67a776,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3862913024 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3758055424 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:37 +0000 UTC,LastTransitionTime:2023-01-30 22:37:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.44,},NodeAddress{Type:ExternalIP,Address:18.230.69.200,},NodeAddress{Type:Hostname,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-44.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-69-200.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2866a629da92bef6391329a4d3d367,SystemUUID:ec2866a6-29da-92be-f639-1329a4d3d367,BootID:a943fe41-4bc4-4772-98e1-0ba5a25bcb7f,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.16 registry.k8s.io/kube-apiserver-amd64:v1.23.16],SizeBytes:129999849,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.16 registry.k8s.io/kube-controller-manager-amd64:v1.23.16],SizeBytes:119940367,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.2],SizeBytes:106139107,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.2],SizeBytes:102637092,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.16 registry.k8s.io/kube-scheduler-amd64:v1.23.16],SizeBytes:51852546,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.2],SizeBytes:8786911,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:50:03.385: INFO: Logging kubelet events for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:50:03.530: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:50:03.679: INFO: kube-scheduler-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container kube-scheduler ready: true, restart count 0 Jan 30 22:50:03.679: INFO: ebs-csi-node-crhx2 started at 2023-01-30 22:37:30 +0000 UTC (0+3 container statuses recorded) Jan 30 22:50:03.679: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:50:03.679: INFO: cilium-bg2hw started at 2023-01-30 22:37:30 +0000 UTC (1+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:50:03.679: INFO: etcd-manager-events-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:50:03.679: INFO: etcd-manager-main-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container etcd-manager ready: true, restart count 0 Jan 30 22:50:03.679: INFO: kops-controller-mrlzz started at 2023-01-30 22:37:31 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container kops-controller ready: true, restart count 0 Jan 30 22:50:03.679: INFO: dns-controller-58d7bbb845-vwkl6 started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container dns-controller ready: true, restart count 0 Jan 30 22:50:03.679: INFO: cilium-operator-c7bfc9f44-bhw9j started at 2023-01-30 22:37:32 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container cilium-operator ready: true, restart count 0 Jan 30 22:50:03.679: INFO: ebs-csi-controller-6dbc9bb9b4-zt6h6 started at 2023-01-30 22:37:32 +0000 UTC (0+5 container statuses recorded) Jan 30 22:50:03.679: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:50:03.679: INFO: kube-apiserver-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+2 container statuses recorded) Jan 30 22:50:03.679: INFO: Container healthcheck ready: true, restart count 0 Jan 30 22:50:03.679: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 22:50:03.679: INFO: kube-controller-manager-ip-172-20-63-44.sa-east-1.compute.internal started at 2023-01-30 22:36:20 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:03.679: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 30 22:50:04.129: INFO: Latency metrics for node ip-172-20-63-44.sa-east-1.compute.internal Jan 30 22:50:04.129: INFO: Logging node info for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:50:04.272: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-63-7.sa-east-1.compute.internal 8ee09ce8-ad2c-4347-b6b0-a38439fe8b38 7860 0 2023-01-30 22:39:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-63-7.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-63-7.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5132":"ip-172-20-63-7.sa-east-1.compute.internal","ebs.csi.aws.com":"i-02d1af952f8cb9055"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:44:35 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-30 22:45:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02d1af952f8cb9055,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:49:42 +0000 UTC,LastTransitionTime:2023-01-30 22:39:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.63.7,},NodeAddress{Type:ExternalIP,Address:52.67.57.31,},NodeAddress{Type:Hostname,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-63-7.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-67-57-31.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec292dc0bb9ad655da1bd5cf4f054caa,SystemUUID:ec292dc0-bb9a-d655-da1b-d5cf4f054caa,BootID:3aa9a5e0-6628-460f-859b-942e6b19dc1d,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f9eeed3999f6acd3,DevicePath:,},},Config:nil,},} Jan 30 22:50:04.272: INFO: Logging kubelet events for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:50:04.418: INFO: Logging pods the kubelet thinks is on node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:50:04.572: INFO: externalsvc-c5qz7 started at 2023-01-30 22:49:02 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.572: INFO: Container externalsvc ready: false, restart count 0 Jan 30 22:50:04.572: INFO: service-proxy-disabled-4zt6p started at 2023-01-30 22:42:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.572: INFO: Container service-proxy-disabled ready: true, restart count 0 Jan 30 22:50:04.572: INFO: csi-mockplugin-0 started at 2023-01-30 22:42:54 +0000 UTC (0+3 container statuses recorded) Jan 30 22:50:04.572: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:50:04.572: INFO: Container driver-registrar ready: true, restart count 0 Jan 30 22:50:04.572: INFO: Container mock ready: true, restart count 0 Jan 30 22:50:04.572: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-v8ln8 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.572: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:04.572: INFO: csi-mockplugin-0 started at 2023-01-30 22:49:11 +0000 UTC (0+3 container statuses recorded) Jan 30 22:50:04.572: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:50:04.572: INFO: Container driver-registrar ready: false, restart count 0 Jan 30 22:50:04.572: INFO: Container mock ready: false, restart count 0 Jan 30 22:50:04.572: INFO: pod-handle-http-request started at 2023-01-30 22:44:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.572: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:50:04.572: INFO: startup-d0748011-46a8-4bb4-9fe0-3c4baf5fbfed started at 2023-01-30 22:49:08 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container busybox ready: false, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-cnpbk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:04.573: INFO: ebs-csi-node-wc6gx started at 2023-01-30 22:39:07 +0000 UTC (0+3 container statuses recorded) Jan 30 22:50:04.573: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:50:04.573: INFO: rs-4k8s4 started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container donothing ready: false, restart count 0 Jan 30 22:50:04.573: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:43:41 +0000 UTC (0+7 container statuses recorded) Jan 30 22:50:04.573: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container csi-provisioner ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container hostpath ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-56dt8 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-2m4f6 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:04.573: INFO: verify-service-up-exec-pod-p5sdh started at 2023-01-30 22:45:14 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container agnhost-container ready: false, restart count 0 Jan 30 22:50:04.573: INFO: pod-834b9c41-ef01-4e0b-b9fc-0f86ce4cb398 started at 2023-01-30 22:48:58 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:50:04.573: INFO: verify-service-up-host-exec-pod started at 2023-01-30 22:45:07 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:50:04.573: INFO: pod-d8cff309-3d6a-4ce5-9ac9-b57de7155461 started at 2023-01-30 22:45:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:50:04.573: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:49:11 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:50:04.573: INFO: inline-volume-tester-62nrc started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 30 22:50:04.573: INFO: service-proxy-toggled-bpxnj started at 2023-01-30 22:43:22 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container service-proxy-toggled ready: true, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fq2r6 started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:04.573: INFO: csi-mockplugin-attacher-0 started at 2023-01-30 22:42:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container csi-attacher ready: true, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-9t9kb started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:04.573: INFO: ss2-1 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container webserver ready: false, restart count 0 Jan 30 22:50:04.573: INFO: cilium-qtf8x started at 2023-01-30 22:39:07 +0000 UTC (1+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:50:04.573: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:50:04.573: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-grfw9 started at 2023-01-30 22:45:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:50:04.573: INFO: hostexec-ip-172-20-63-7.sa-east-1.compute.internal-lx82t started at 2023-01-30 22:45:16 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:50:04.573: INFO: pod-c28eace7-f9af-4aa6-896f-40a90618d6c5 started at 2023-01-30 22:45:23 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container write-pod ready: false, restart count 0 Jan 30 22:50:04.573: INFO: csi-mockplugin-resizer-0 started at 2023-01-30 22:42:54 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container csi-resizer ready: true, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-nft6k started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-zrntw started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-fp6pt started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: false, restart count 0 Jan 30 22:50:04.573: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-qbtpc started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:50:04.573: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:50:05.527: INFO: Latency metrics for node ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:50:05.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-3887" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\sfrom\sfile\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[Excluded\:WindowsDocker\]\s\[NodeConformance\]\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 30 22:55:49.148: Timed out after 300.001s. Expected <v1.PodPhase>: Pending to equal <v1.PodPhase>: Succeeded /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:154
[BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 30 22:50:48.004: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename container-runtime �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded Jan 30 22:55:49.148: FAIL: Timed out after 300.001s. Expected <v1.PodPhase>: Pending to equal <v1.PodPhase>: Succeeded Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.glob..func17.1.2.1({{0x70c8b1c, 0x1d}, {0xc000078c60, 0x29}, {0xc0046681a0, 0x2, 0x2}, {0xc003b474b0, 0x1, 0x1}, ...}, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:154 +0x392 k8s.io/kubernetes/test/e2e/common/node.glob..func17.1.2.6() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:262 +0x193 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24eec17?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x6b7 k8s.io/kubernetes/test/e2e.TestE2E(0x24602d9?) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000102d00, 0x72ecb90) /usr/local/go/src/testing/testing.go:1446 +0x10b created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1493 +0x35f [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "container-runtime-7131". �[1mSTEP�[0m: Found 12 events. Jan 30 22:55:49.442: INFO: At 2023-01-30 22:50:49 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {default-scheduler } Scheduled: Successfully assigned container-runtime-7131/termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2 to ip-172-20-63-7.sa-east-1.compute.internal Jan 30 22:55:49.442: INFO: At 2023-01-30 22:50:50 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2c2f427812f8fb9775db5525002066f430bd59abccc4d663b0d7c6782f2bfdf7" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:50:53 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 22:55:49.442: INFO: At 2023-01-30 22:50:55 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5aa33bbb9b520da1658675748e2880252e1b41de062cffbbfa0413d6d391d5ac" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:50:58 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e1cec7fc2623dda09f61bb5b4c8914bc2e02f891b209c8f4a13e07527b98b9a2" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:01 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "15ad56e9a95f5d3b7048504cf253d575e2854fb0747122b8f155f39ba100bb4a" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:03 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e56ec4ff3a12eed3eeefa4fe11908a05d01849e20073a445b6ce35aedc5e00ab" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:06 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9e0b4146f94bead47a1579d535352492b95eb33e3ae2a28b46c870d7469f0029" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:08 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "833b8690e831118c34138ee12cb691936ca4a8ec5feee9f9e422a0047f663cb1" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:11 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "00c4a7c66661db3c289baf1e07cf6ee084f890de6ac3c3e9d738decac986b677" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:14 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "20dd1e23325cec0c9d290e7ff4ddbeb3b6e5850fc0f3c48ba1786eb2e3212eda" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.442: INFO: At 2023-01-30 22:51:16 +0000 UTC - event for termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2: {kubelet ip-172-20-63-7.sa-east-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ef48ccda27527708bdddff8b093b080c584891adcba93883ca07ceb98bba06f3" network for pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2": networkPlugin cni failed to set up pod "termination-message-container273f6b47-08fc-4262-a3ab-3617fa0ad4c2_container-runtime-7131" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 30 22:55:49.585: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 22:55:49.585: INFO: Jan 30 22:55:49.728: INFO: Logging node info for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:55:49.873: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-244.sa-east-1.compute.internal 1be0c21f-5cd5-49c3-937b-dcb7d30e890a 10033 0 2023-01-30 22:39:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-244.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.hostpath.csi/node:ip-172-20-37-244.sa-east-1.compute.internal topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02de6750f6f07da4c"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-02de6750f6f07da4c,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:18 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.244,},NodeAddress{Type:ExternalIP,Address:54.232.162.137,},NodeAddress{Type:Hostname,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-244.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-232-162-137.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2350fb0335a8c0068ce4bddeab7362,SystemUUID:ec2350fb-0335-a8c0-068c-e4bddeab7362,BootID:80522224-50f0-4d12-bc36-a8ad10d0e9d2,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 22:55:49.874: INFO: Logging kubelet events for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:55:50.019: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:55:50.170: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-7jbrf started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.170: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-xkzrs started at 2023-01-30 22:54:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:55:50.170: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-tvmsg started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.170: INFO: pod-subpath-test-preprovisionedpv-9kt4 started at 2023-01-30 22:54:31 +0000 UTC (2+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Init container init-volume-preprovisionedpv-9kt4 ready: false, restart count 0 Jan 30 22:55:50.170: INFO: Init container test-init-volume-preprovisionedpv-9kt4 ready: false, restart count 0 Jan 30 22:55:50.170: INFO: Container test-container-subpath-preprovisionedpv-9kt4 ready: false, restart count 0 Jan 30 22:55:50.170: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-vpcj2 started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.170: INFO: coredns-867df8f45c-q48mf started at 2023-01-30 22:39:48 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container coredns ready: true, restart count 0 Jan 30 22:55:50.170: INFO: ss2-1 started at 2023-01-30 22:43:37 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container webserver ready: true, restart count 0 Jan 30 22:55:50.170: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5jrwl started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.170: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-mvnww started at 2023-01-30 22:54:13 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:55:50.170: INFO: ebs-csi-node-wwnfq started at 2023-01-30 22:39:09 +0000 UTC (0+3 container statuses recorded) Jan 30 22:55:50.170: INFO: Container ebs-plugin ready: true, restart count 0 Jan 30 22:55:50.170: INFO: Container liveness-probe ready: true, restart count 0 Jan 30 22:55:50.170: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 30 22:55:50.170: INFO: netserver-0 started at 2023-01-30 22:55:14 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container webserver ready: false, restart count 0 Jan 30 22:55:50.170: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jxx2t started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.170: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-5qz9q started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.170: INFO: pod-subpath-test-preprovisionedpv-wrbq started at 2023-01-30 22:55:00 +0000 UTC (2+1 container statuses recorded) Jan 30 22:55:50.170: INFO: Init container init-volume-preprovisionedpv-wrbq ready: false, restart count 0 Jan 30 22:55:50.170: INFO: Init container test-init-volume-preprovisionedpv-wrbq ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container test-container-subpath-preprovisionedpv-wrbq ready: false, restart count 0 Jan 30 22:55:50.171: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-stbbr started at 2023-01-30 22:55:09 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:55:50.171: INFO: local-injector started at 2023-01-30 22:55:15 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container local-injector ready: false, restart count 0 Jan 30 22:55:50.171: INFO: cilium-2kmmh started at 2023-01-30 22:39:09 +0000 UTC (1+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 30 22:55:50.171: INFO: Container cilium-agent ready: true, restart count 0 Jan 30 22:55:50.171: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-jvpvp started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.171: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-xbf2p started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.171: INFO: rs-hh4qw started at 2023-01-30 22:49:21 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container donothing ready: false, restart count 0 Jan 30 22:55:50.171: INFO: hostexec-ip-172-20-37-244.sa-east-1.compute.internal-5pprs started at 2023-01-30 22:50:34 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container agnhost-container ready: true, restart count 0 Jan 30 22:55:50.171: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-g7dxk started at 2023-01-30 22:43:42 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.171: INFO: cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da-x4sjr started at 2023-01-30 22:43:43 +0000 UTC (0+1 container statuses recorded) Jan 30 22:55:50.171: INFO: Container cleanup40-8085bd98-9753-46c6-a4fd-d1c89e00c3da ready: true, restart count 0 Jan 30 22:55:50.171: INFO: csi-hostpathplugin-0 started at 2023-01-30 22:54:32 +0000 UTC (0+7 container statuses recorded) Jan 30 22:55:50.171: INFO: Container csi-attacher ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container csi-provisioner ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container csi-resizer ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container hostpath ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container liveness-probe ready: false, restart count 0 Jan 30 22:55:50.171: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 30 22:55:51.166: INFO: Latency metrics for node ip-172-20-37-244.sa-east-1.compute.internal Jan 30 22:55:51.166: INFO: Logging node info for node ip-172-20-46-143.sa-east-1.compute.internal Jan 30 22:55:51.309: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-46-143.sa-east-1.compute.internal 4ac0f2fd-a06b-4650-9c4a-c2964727bf42 9940 0 2023-01-30 22:39:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:sa-east-1 failure-domain.beta.kubernetes.io/zone:sa-east-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-46-143.sa-east-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:sa-east-1a topology.kubernetes.io/region:sa-east-1 topology.kubernetes.io/zone:sa-east-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0549a01609c77b117"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 22:39:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-30 22:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-30 22:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///sa-east-1a/i-0549a01609c77b117,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 22:54:09 +0000 UTC,LastTransitionTime:2023-01-30 22:39:28 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.46.143,},NodeAddress{Type:ExternalIP,Address:18.230.23.25,},NodeAddress{Type:Hostname,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-46-143.sa-east-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-230-23-25.sa-east-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec25bac6007d23dab6609e76a6663500,SystemUUID:ec25bac6-007d-23da-b660-9e76a6663500,BootID:cd72b157-4d78-4df9-997f-bab559376690,KernelVersion:5.15.0-1028-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.16,KubeProxyVersion:v1.23.16,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.16 registry.k8s.io/kube-proxy-amd64:v1.23.16],SizeBytes:110832791,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236df